kapi

package module
v1.1.1
Published: Dec 11, 2024 License: GPL-3.0 Imports: 27 Imported by: 0

README

kapi

kapi provides a simplified interface to the controller-runtime library.

It significantly reduces the amount of boilerplate, code-gen and complexity required to build Kubernetes controllers, operators and define CRDs.

Features

  • Client Operations: Perform CRUD operations on Kubernetes resources using a generic client interface.
  • Custom Resource Support: Define and manage custom resources using generic types rather than code-gen.
  • Reconciliation: Add custom reconciliation logic for Kubernetes resources with support for event filtering.
  • Observability: Integrate structured logging and metrics for better observability of operations.
  • Validators: Implement validation logic to ensure resources meet specific criteria before operations.

Quick Start

For a fuller example of how to use various kapi features, see the example and consult the remainder of this README.

The snippet below shows how to define a basic reconciler for a custom resource. Some boilerplate code related to imports and error handling is omitted for brevity.

package main

import (
    // ... omitted for brevity ...
)

// Define a custom resource and its list form. Type aliases can be used to improve readability and reduce repetition of generic arguments.
type (
    ExampleResourceSpec struct {
        ExampleData string `json:"exampleData"`
    }
    // the example resource type only defines the spec, and omits status and scale using kapi.FieldUndefined
    ExampleResource     = kapi.CustomResource[ExampleResourceSpec, kapi.FieldUndefined, kapi.FieldUndefined]
    ExampleResourceList = kapi.CustomResourceList[*ExampleResource]
)

func main() {
    log, ctx := slog.New(slog.NewJSONHandler(os.Stdout, nil)), context.Background()

    // Initialize the kapi library with the logger to configure observability
    kapi.Init(kapi.UseSlog(ctx, log))

    // Create a new cluster to encapsulate the Kubernetes context by defining the namespace scope and the CRDs
    cluster, _ := kapi.NewCluster(ctx, kapi.ClusterConfig{
        Namespaces: []string{"kapi-quickstart"},
        CRDs: []kapi.CRDs{
            {
                APIGroup:   "kapi-quickstart.comradequinn.github.io",
                APIVersion: "v1",
                Kinds: map[string]kapi.KindType{
                    "ExampleResource":     &ExampleResource{},
                    "ExampleResourceList": &ExampleResourceList{},
                },
            },
        },
    })

    // Add a reconciler to the cluster to handle changes to the ExampleResource custom resource type
    kapi.AddReconciler(ctx, cluster, nil, func(ctx context.Context, evt kapi.ReconcileEventType, exampleResource *ExampleResource) error {

        // Create a client for the ExampleResource custom resource
        klient := kapi.ClientFor[*ExampleResource, *ExampleResourceList](ctx, cluster, true)

        // List all ExampleResource custom resources in the cluster
        exampleResources, _ := klient.List(ctx)

        // Log the number of ExampleResource custom resources in the cluster
        log.Info("example resource reconciled", "count", len(exampleResources.Items))

        return nil
    })

    // Connect to the cluster to begin processing events in the reconciler
    if err := cluster.Connect(ctx); err != nil {
        log.Error("failed to connect to cluster", "error", err)
    }
}

To spin up a local k8s cluster and deploy the full example, run make example.

Disclaimer

kapi is not a replacement for controller-runtime.

It is a higher level library that sits on top of controller-runtime and provides a simplified and opinionated approach to building controllers, operators and defining CRDs. As such it may not be a good fit for all use cases.

The main trade-off is its purely library-oriented approach, in contrast to the code-gen based approach offered by kubebuilder, which is often used with controller-runtime. This leaves manifest generation to other tools, though those tools are arguably better suited to the task for many use cases (for example, AI-powered code editors such as Cursor or Copilot can readily generate YAML directly from Go struct definitions).

Installation

To install kapi, use go get:

go get github.com/comradequinn/kapi

Usage

Initialising Observability

Configure observability for all kapi clusters using the Init function.

In the following example, UseSlog is used to set up basic structured logging and metrics output using the slog package.

import (
    // ... omitted for brevity ...
)

func main() {
    ctx := context.Background()

    obs := kapi.UseSlog(ctx, slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{Level: slog.LevelDebug})))

    kapi.Init(obs)

    // use kapi...
}

In this alternative example, custom implementations for LogFunc and MetricTimerFunc are provided to integrate with other logging and metrics providers.

kapi.Init(kapi.ObservabilityConfig{
    BackgroundContext: context.Background(),
    LogFunc:           yourLogFunc,
    MetricTimerFunc:   yourMetricTimerFunc,
})
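
As a rough sketch of what such implementations might look like, the functions below forward kapi's log events to the default slog logger and time metrics with the standard library. The level-to-slog mapping and the metric output shown here are assumptions for illustration, not part of the kapi API:

package main

import (
    "context"
    "log/slog"
    "time"
)

// yourLogFunc is an illustrative adapter that forwards kapi log events to the default slog logger.
// kapi uses level 0 for errors through level 3 for debug, so higher levels map to more verbose slog levels.
func yourLogFunc(ctx context.Context, level int, msg string, attributes ...any) {
    switch level {
    case 0:
        slog.ErrorContext(ctx, msg, attributes...)
    case 1:
        slog.WarnContext(ctx, msg, attributes...)
    case 2:
        slog.InfoContext(ctx, msg, attributes...)
    default:
        slog.DebugContext(ctx, msg, attributes...)
    }
}

// yourMetricTimerFunc is an illustrative timer that simply logs the elapsed duration when the
// returned func is called; a real implementation would forward to a metrics provider instead.
func yourMetricTimerFunc(ctx context.Context, metric string) func(attributes ...string) {
    start := time.Now()
    return func(attributes ...string) {
        slog.InfoContext(ctx, "metric", "name", metric, "duration_ms", time.Since(start).Milliseconds(), "attributes", attributes)
    }
}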
Creating a Cluster

Create a new kapi.Cluster to encapsulate the Kubernetes context by defining the namespace scope and the CRDs. If you are implementing any hooks, you will need to provide the TLS certificate location too.

cluster, _ := kapi.NewCluster(ctx, kapi.ClusterConfig{
    Namespaces: []string{"kapi-quickstart"},
    CRDs: []kapi.CRDs{
        {
            APIGroup:   "kapi.comradequinn.github.io",
            APIVersion: "v1",
            Kinds: map[string]kapi.KindType{
                "ExampleResource":     &ExampleResource{},
                "ExampleResourceList": &ExampleResourceList{},
            },
        },
    },
})
Adding Hooks

Hooks provide admission control functionality, allowing you to validate resources or apply default values to them before CRUD operations occur. These are typically used for enforcing business rules or setting default values.

The Hook type supports several operations:

  • ValidateCreateFunc: Validates resources before creation
  • ValidateUpdateFunc: Validates resources before updates
  • ValidateDeleteFunc: Validates resources before deletion
  • DefaulterFunc: Sets default values for new resources

Any of these functions can be omitted if not required and will default to a validation success or a no-op defaulter.

In the example below, a ValidateCreateFunc is provided to ensure the ExampleData field is set:

import (
    // ... omitted for brevity ...
)

func main() {
    // ... kapi initialisation code omitted for brevity ...

    err := kapi.AddHook(ctx, cluster, &kapi.Hook[*ExampleResource]{
        ValidateCreateFunc: func(ctx context.Context, resource *ExampleResource) (warnings []string, err error) {
            if resource.Spec.ExampleData == "" {
                return nil, fmt.Errorf("example-data is required")
            }
            return nil, nil
        },
        // other validation functions and/or a defaulting function can also be provided...
    })
}
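
A DefaulterFunc can be registered in the same way. The sketch below reuses the ExampleResource type from the quick start; the fallback value applied is purely illustrative:

err := kapi.AddHook(ctx, cluster, &kapi.Hook[*ExampleResource]{
    // DefaulterFunc is called before the resource is persisted, allowing default values to be applied
    DefaulterFunc: func(ctx context.Context, resource *ExampleResource) error {
        if resource.Spec.ExampleData == "" {
            resource.Spec.ExampleData = "default-value" // illustrative default only
        }
        return nil
    },
})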
Adding a Reconciler

Add a reconciler to handle resource events for a specific resource type. The resource type itself is inferred from the argument passed to the reconcilerFunc parameter.

In this example, a filter is also applied to respond only to create events:

import (
    // ... omitted for brevity ...
)

func main() {
    // ... kapi initialisation code omitted for brevity ...

    reconcileFilterFunc := func(e kapi.ResourceEventType, o client.Object) bool {
        return e == kapi.ResourceEventCreate
    }

    err := kapi.AddReconciler(context.Background(), cluster, reconcileFilterFunc, func(ctx context.Context, eventType kapi.ReconcileEventType, resource *ExampleResource) error {
        // optionally create a client for the resource type
        klient := kapi.ClientFor[*ExampleResource, *ExampleResourceList](ctx, cluster, true)

        // perform operations using the client and any other reconciler logic...

        // return nil to indicate success; an error will trigger a requeue
        return nil
    })
}
Connecting to the Cluster

Connect to the cluster to start all configured reconcilers and enable the client cache:

err = cluster.Connect(context.Background())
if err != nil {
    log.Fatalf("Failed to connect to cluster: %v", err)
}
Using the Client

The kapi.Client provides a convenient way to perform various I/O operations against resources on a Kubernetes cluster. It supports operations such as creating, updating, deleting, getting, and listing resources.

Creating a Client

To create a client for a specific resource type, use the ClientFor function. This function requires a context, a cluster, and a boolean indicating whether to use caching.

klient := kapi.ClientFor[*ExampleResource, *ExampleResourceList](ctx, cluster, true)

Caching should typically be enabled as it is more efficient. However, there can be a delay before the latest resource state is available in the cache. If your application requires the most up-to-date resource state immediately, you may need to disable caching.
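
For example, a second, non-cached client can be created purely for reads that must reflect the live cluster state. The snippet below reuses the ExampleResource types from earlier; freshClient is simply an illustrative name:

// a non-cached client reads directly from the cluster rather than the local cache
freshClient := kapi.ClientFor[*ExampleResource, *ExampleResourceList](ctx, cluster, false)

// this read bypasses the cache and so reflects the latest persisted state
resource, err := freshClient.Get(ctx, "example-namespace", "example-name")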

Client Operations

Once you have a client, you can perform various operations:

Create a Resource

Use the Create method to add a new resource to the cluster.

exampleResource := &ExampleResource{
    Spec: ExampleResourceSpec{
        ExampleData: "initial value",
    },
}
exampleResource.Name = "example-name"
exampleResource.Namespace = "example-namespace"

err := klient.Create(ctx, exampleResource)
Get a Resource

Retrieve a specific resource using the Get method.

resource, err := klient.Get(ctx, "example-namespace", "example-name")
List Resources

List all resources of a specific type with the List method.

resources, err := klient.List(ctx)
Update a Resource

Modify an existing resource using the Update method.

// ... code to get the resource to update omitted for brevity ...

exampleResource.Spec.ExampleData = "updated value"

err = klient.Update(ctx, exampleResource)

In some cases, only a subresource may require updating, in which case the relevant subresource(s) can be specified via the variadic argument, as shown below.

// ... code to get the resource to update omitted for brevity ...

exampleResource.Status.Active = true // modify the subresource as required

err = klient.Update(ctx, exampleResource, "status") // specify that the update applies only to the subresource
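
The Subresource type also provides the constants kapi.SubresourceStatus and kapi.SubresourceScale, which can be passed in place of raw strings such as "status".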

Delete a Resource

Remove a resource from the cluster with the Delete method.

// ... code to get the resource to delete omitted for brevity ...

err = klient.Delete(ctx, exampleResource)

Defining Custom Resources

Define custom resources using the CustomResource and CustomResourceList structs. An example is shown below:

import "github.com/comradequinn/kapi"

type (
    ExampleResource     = kapi.CustomResource[ExampleResourceSpec, kapi.FieldUndefined, kapi.FieldUndefined]
    ExampleResourceList = kapi.CustomResourceList[*ExampleResource]

    ExampleResourceSpec struct {
        ExampleData string `json:"exampleData"`
    }
)

In this example, kapi.FieldUndefined is used as a placeholder for fields that are not needed in the custom resource definition. This allows you to focus on defining only the necessary fields, such as Spec, while omitting others like Status or Scale if they are not required.

Using type aliases, like ExampleResource and ExampleResourceList in the snippet above, improves code clarity both by providing meaningful names for types and by reducing the repetition of generic type arguments.
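
Where a status (or scale) subresource is required, a struct can be supplied for the corresponding type parameter instead of kapi.FieldUndefined. A minimal sketch, using an assumed Active field to match the status update example shown earlier, is given below:

type (
    ExampleResourceSpec struct {
        ExampleData string `json:"exampleData"`
    }
    ExampleResourceStatus struct {
        Active bool `json:"active"`
    }

    // the second type parameter is now populated, so the resource exposes a Status subresource
    ExampleResource     = kapi.CustomResource[ExampleResourceSpec, ExampleResourceStatus, kapi.FieldUndefined]
    ExampleResourceList = kapi.CustomResourceList[*ExampleResource]
)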

Deployment

The lib-oriented approach of kapi allows for the definition and deployment of controllers and operators in a way that better suits existing architectures and deployment pipelines.

Deploying a kapi-based controller is simply a matter of creating a deployment in your usual manner. By default, kapi assumes a single-replica deployment model; however, multiple replicas may be configured to enable high availability, in which case the LeaderElection field on the kapi.ClusterConfig must be set, as shown below.

cluster, _ := kapi.NewCluster(ctx, kapi.ClusterConfig{
    Namespaces: []string{"kapi-quickstart"},
    LeaderElection: kapi.LeaderElectionConfig{
        Enabled:      true,
        LockResource: "kapi-quickstart-leader-election-lock",
    },
})

Metrics and Logging

The kapi package provides comprehensive observability through structured logging and metrics. Here's an overview of the types of metrics and logs emitted:

Logging
  • Log Levels:

    • Level 0: Error logs, indicating critical issues that need immediate attention.
    • Level 1: Warning logs, highlighting potential issues or important events.
    • Level 2: Info logs, providing general information about the application's operation.
    • Level 3: Debug logs, offering detailed insights for troubleshooting and development.
  • Log Messages:

    • Client Operations: Logs are emitted for each CRUD operation (create, update, delete, get, list) performed by the Client with details about the resource type and action. These can be identified with type=kapi_client_summary or type=kapi_client_trace; with the latter also containing additional trace information.
    • Hook Events: Logs are generated when hooks are triggered, including validation results and any defaults applied. These can be identified with type=kapi_hook_summary or type=kapi_hook_trace; with the latter also containing additional trace information.
    • Reconciler Events: Logs are generated when a reconciler is invoked, including the resource name, type, and event type (created, updated, deleted). These can be identified with type=kapi_reconciler_summary or type=kapi_reconciler_trace; with the latter also containing additional trace information.
    • Cluster Operations: Logs are produced during cluster creation and connection, detailing the namespaces and CRDs involved.
Metrics
  • Metric Timers:
    • kapi_client: Measures the duration of client operations, providing insights into the performance of CRUD actions.
    • kapi_hook: Tracks the execution time of hook operations, including validation and default value application.
    • kapi_new_cluster: Tracks the time taken to create a new cluster, helping identify potential bottlenecks in cluster initialization.
    • kapi_connect: Monitors the time required to connect to a cluster, ensuring efficient startup of reconcilers.
    • kapi_add_reconciler: Captures the time spent adding a reconciler, useful for understanding the setup overhead.
    • kapi_reconcile: Records the duration of reconciliation processes, aiding in performance analysis of resource event handling.
Correlation

Each log entry includes a correlation_id to trace and correlate events across different components and operations, enhancing the ability to diagnose issues and understand system behaviour.
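
When a custom ObservabilityConfig is supplied, the NewCorrelationCtx field controls how that correlation context is created at the start of each reconciliation. A minimal sketch is shown below; the context key and counter-based id are assumptions, and a real implementation would typically generate a UUID or trace id that its LogFunc then reads back from the context:

import (
    "context"
    "fmt"
    "sync/atomic"
)

type correlationKey struct{}

var correlationCounter atomic.Int64

// newCorrelationCtx returns a context carrying a fresh correlation id; a custom LogFunc can
// read the id back from the context and attach it to each log entry as correlation_id
func newCorrelationCtx(ctx context.Context) context.Context {
    return context.WithValue(ctx, correlationKey{}, fmt.Sprintf("corr-%d", correlationCounter.Add(1)))
}

This would then be set as the NewCorrelationCtx field on the kapi.ObservabilityConfig passed to kapi.Init.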

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

Documentation

Index

Constants

This section is empty.

Variables

var (
	ResourceEventTypeCreated = ResourceEventType(0)
	ResourceEventTypeUpdated = ResourceEventType(1)
	ResourceEventTypeDeleted = ResourceEventType(2)
)

var (
	ReconcileEventTypeCreatedOrUpdated = ReconcileEventType(0)
	ReconcileEventTypeDeleted          = ReconcileEventType(1)
)

Functions

func AddHook

func AddHook[T client.Object](ctx context.Context, cluster *Cluster, hook *Hook[T]) error

AddHook registers a hook with the provided cluster.

Hooks defines admission control functionality for validating and applying default values to resources before CRUD operations occur.

func AddReconciler

func AddReconciler[T client.Object](ctx context.Context, cluster *Cluster, reconcilerFilterFunc ReconcilerFilterFunc, reconcilerFunc ReconcilerFunc[T]) error

AddReconciler causes the specified ReconcilerFunc to be invoked whenever any resource of type T in the specified cluster is subject to a modification event: create, update or delete.

A reconcilerFilterFunc can be provided to reduce the scope of the reconciler. It is invoked before the ReconcilerFunc and is passed the object being modified and the type of the modification; the ReconcilerFunc is only invoked if the reconcilerFilterFunc returns true.

A nil reconcilerFilterFunc matches all events.

func Init

func Init(cfg ObservabilityConfig)

Init configures observability for all kapi.Clusters

Types

type CRDs

type CRDs struct {
	APIGroup   string
	APIVersion string
	Kinds      map[string]KindType
}

CRDs defines a mapping between a set of one or more structs that each represent a CRD and the k8s API Group and Version that they are defined within

type Client

type Client[TItem client.Object, TList client.ObjectList] struct {
	// contains filtered or unexported fields
}

Client can be used to perform various IO operations against resources on a k8s cluster

func ClientFor

func ClientFor[TItem client.Object, TList client.ObjectList](ctx context.Context, cluster *Cluster, cache bool) *Client[TItem, TList]

ClientFor returns a Client that can be used to perform various IO operations against resources on a k8s cluster

If cache is true, the client will use the cache to store and retrieve resources. This is more efficient and should be used by default.

However, as there can be a delay before the latest resource state is available in the cache, some clients may need to disable it in order to retrieve the latest resource state from the cluster.

func (*Client[TItem, TList]) Create

func (c *Client[TItem, TList]) Create(ctx context.Context, resource TItem) error

Create creates a resource on the k8s cluster

func (*Client[TItem, TList]) Delete

func (c *Client[TItem, TList]) Delete(ctx context.Context, resource TItem) error

Delete removes a resource from the k8s cluster

func (*Client[TItem, TList]) Get

func (c *Client[TItem, TList]) Get(ctx context.Context, namespace, name string) (TItem, error)

Get returns data describing the specified resource

func (*Client[TItem, TList]) List

func (c *Client[TItem, TList]) List(ctx context.Context) (TList, error)

List returns data describing all occurrences of the resource type associated with the client

func (*Client[TItem, TList]) Update

func (c *Client[TItem, TList]) Update(ctx context.Context, resource TItem, subresources ...Subresource) error

Update modifies a resource on the k8s cluster. Optionally, specific subresources can be provided, which will limit updates to only those subresources

type Cluster

type Cluster struct {
	// contains filtered or unexported fields
}

Cluster represents a k8s cluster. A kapi.Cluster is used to:

  • configure one or more ReconcilerFuncs that are executed when specified k8s cluster resource-change events occur
  • access a `client` that can be used to perform resource level CRUD operations against a k8s cluster

func NewCluster

func NewCluster(ctx context.Context, cfg ClusterConfig) (*Cluster, error)

NewCluster returns a new kapi.Cluster based on the passed kapi.ClusterConfig

Multiple kapi.Clusters can be created to manage different configurations. However, it is preferable and typical to re-use the same kapi.Cluster where possible as this improves cache efficiency.

For example, prefer creating one kapi.Cluster and adding two Reconcilers over creating two kapi.Clusters with one Reconciler attached to each, as sketched below.
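
For illustration only, and assuming two custom resource types named WidgetResource and GadgetResource (with their CRD registration omitted), that preference looks like this:

cluster, _ := kapi.NewCluster(ctx, kapi.ClusterConfig{ /* namespace and CRD configuration omitted */ })

// one kapi.Cluster shared by two reconcilers, so both benefit from the same underlying cache
_ = kapi.AddReconciler(ctx, cluster, nil, func(ctx context.Context, evt kapi.ReconcileEventType, r *WidgetResource) error { return nil })
_ = kapi.AddReconciler(ctx, cluster, nil, func(ctx context.Context, evt kapi.ReconcileEventType, r *GadgetResource) error { return nil })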

func (*Cluster) Connect

func (cluster *Cluster) Connect(ctx context.Context) error

Connect starts all configured Reconcilers and enables the use of Clients

type ClusterConfig

type ClusterConfig struct {
	// TLS defines the directory in which the TLS certificates to use when serving any configured hooks are stored
	TLS string
	// DisableCaching disables caching of cluster information locally.
	//
	// This is typically used in scenarios where resources are modified outside of the kapi.Cluster scope or where
	// resource modifications need to be reflected immediately
	DisableCaching bool
	// LeaderElectionConfig defines the configuration for leader election. By default it is disabled.
	// Enabling leader election allows multiple replicas of a kapi-based controller or operator to be deployed to support high availability
	LeaderElection LeaderElectionConfig
	// Namespaces defines the namespaces for which to invoke configured Reconcilers
	// An empty slice results in applicable Reconcilers being invoked for all namespaces
	Namespaces []string
	// CRDs defines any CRDs that the Cluster must recognise
	CRDs []CRDs
}

ClusterConfig defines information about how to interact with a specific k8s cluster

type CustomResource

type CustomResource[TSpec any, TStatus any, TScale any] struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   TSpec   `json:"spec,omitempty"`
	Status TStatus `json:"status,omitempty"`
	Scale  TScale  `json:"scale,omitempty"`
}

CustomResource defines a template for a struct that represents a K8s CustomResource with the conventional fields of Spec, Status and Scale. The typical use-case is to embed it in a descriptively named struct that represents the CR itself.

For example, the below defines a CR named ExampleResource that only exposes a Spec field.

ExampleResource struct {
	kapi.CustomResource[ExampleResourceSpec, kapi.FieldUndefined, kapi.FieldUndefined]
}

ExampleResourceSpec struct {
	ExampleData string `json:"exampleData"`
}

The conventional CR fields are defined as follows:

  • 'Spec' defines the main properties of the resource; its desired state.
  • 'Status', typically configured as a subresource in the CustomResourceDefinition, defines the current state.
  • 'Scale', typically configured as a subresource in the CustomResourceDefinition, defines the scaling properties of the resource.

Where any of the above fields are not required, they should be set to the type kapi.FieldUndefined

func (*CustomResource[TSpec, TStatus, TScale]) DeepCopyObject

func (e *CustomResource[TSpec, TStatus, TScale]) DeepCopyObject() runtime.Object

type CustomResourceList

type CustomResourceList[T client.Object] struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []T `json:"items"`
}

CustomResourceList defines a template for the list representation of zero or more CustomResource[T, T, T] items

func (*CustomResourceList[T]) DeepCopyObject

func (e *CustomResourceList[T]) DeepCopyObject() runtime.Object

type FieldUndefined

type FieldUndefined *struct{}

FieldUndefined is used to indicate that a CustomResource does not define a particular conventional field

type Hook

type Hook[T client.Object] struct {
	DefaulterFunc      func(ctx context.Context, resource T) error
	ValidateCreateFunc func(ctx context.Context, resource T) (warnings []string, err error)
	ValidateUpdateFunc func(ctx context.Context, oldResource, newResource T) (warnings []string, err error)
	ValidateDeleteFunc func(ctx context.Context, resource T) (warnings []string, err error)
}

Hook defines admission control functionality for validating and applying default values to resources before CRUD operations occur.

A hook is typically used for enforcing business rules or setting default values by implementing one or more of the following functions:

  • ValidateCreateFunc: Validates resources before creation
  • ValidateUpdateFunc: Validates resources before updates
  • ValidateDeleteFunc: Validates resources before deletion
  • DefaulterFunc: Sets default values for new resources

Any of these functions can be omitted (left as nil) if the desired behavior is not required.

func (*Hook[T]) Default

func (h *Hook[T]) Default(ctx context.Context, obj runtime.Object) error

func (*Hook[T]) ValidateCreate

func (h *Hook[T]) ValidateCreate(ctx context.Context, obj runtime.Object) (admission.Warnings, error)

func (*Hook[T]) ValidateDelete

func (h *Hook[T]) ValidateDelete(ctx context.Context, obj runtime.Object) (admission.Warnings, error)

func (*Hook[T]) ValidateUpdate

func (h *Hook[T]) ValidateUpdate(ctx context.Context, oldObj, newObj runtime.Object) (admission.Warnings, error)

type KindType

type KindType runtime.Object

KindType is an interface that is implemented by any type that is based on kapi.CustomResource or kapi.CustomResourceList

type LeaderElectionConfig added in v1.1.0

type LeaderElectionConfig struct {
	// Enabled enables leader election for kapi-based controllers or operators running multiple replicas to support high availability
	Enabled bool
	// LockResource defines the name of the resource that will be used as the lock resource in leader elections.
	// This should be set to a unique value for the consuming application to avoid conflicts with other applications' leader elections
	LockResource string
	// Namespace defines the namespace in which the leader election resource will be created.
	// This can be left unset if the kapi.ClusterConfig Namespaces field is set to one or more namespaces, in which case the first namespace in the list is used
	Namespace string
}

LeaderElectionConfig defines the configuration for leader election in high availability deployments

type ListUndefined

type ListUndefined struct {
	CustomResourceList[*CustomResource[FieldUndefined, FieldUndefined, FieldUndefined]]
}

ListUndefined is used to indicate that a CustomResource does not have a list form, or the form is not relevant to the current use-case

type ObservabilityConfig

type ObservabilityConfig struct {
	// BackgroundContext specifies the context to be used for logs generated by background operations or internal ctrl-runtime activity
	BackgroundContext context.Context
	// LogFunc should be set to a context aware, levelled and structured logger implementation.
	// - level defines the verbosity, where a higher value indicates an increased level of detail.
	// - message defines a summary of the log event
	// - attributes is a variadic list of variables that are parsed as kv pairs and used as structured log attributes
	LogFunc func(ctx context.Context, level int, msg string, attributes ...any)
	// MetricTimerFunc should be set to a suitable, context aware metric or trace func able to write count and duration metrics with dimensions/attributes
	MetricTimerFunc func(ctx context.Context, metric string) func(attributes ...string)
	// NewCorrelationCtx defines a func that generates a new correlation context for the purposes of logs or tracing.
	// This is called at the start of each reconciliation allowing activities associated with it to be correlated
	NewCorrelationCtx func(ctx context.Context) context.Context
}

ObservabilityConfig defines the mechanisms by which logs, metrics and traces are processed

func UseSlog

func UseSlog(ctx context.Context, log *slog.Logger) ObservabilityConfig

UseSlog generates a kapi.ObservabilityConfig based on a passed slog.Logger

type ReconcileEventType

type ReconcileEventType int

ReconcileEventType defines the limited, summarised set of event types that can affect a k8s resource as presented to an invoked ReconcilerFunc

func (ReconcileEventType) String

func (r ReconcileEventType) String() string

type ReconcilerFilterFunc

type ReconcilerFilterFunc func(ResourceEventType, client.Object) bool

A ReconcilerFilterFunc is used to reduce the scope of a reconciler. ReconcilerFilterFuncs are invoked before ReconcilerFuncs and are passed the object being modified and the type of the modification. The associated ReconcilerFunc is only invoked if the ReconcilerFilterFunc returns true.

type ReconcilerFunc

type ReconcilerFunc[T client.Object] func(ctx context.Context, eventType ReconcileEventType, resource T) error

ReconcilerFunc is a generic func that can be added to a kapi.Cluster.

It will be invoked with the relevant resource data whenever a resource is created, updated or deleted. When executing as a result of a delete event, `resource T` will be set to its zero-value
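
For example, a ReconcilerFunc will usually branch on the event type before inspecting the resource, since the resource is zero-valued for delete events. A minimal sketch, using the ExampleResource type from the CustomResource example above:

var exampleReconciler kapi.ReconcilerFunc[*ExampleResource] = func(ctx context.Context, eventType kapi.ReconcileEventType, resource *ExampleResource) error {
	if eventType == kapi.ReconcileEventTypeDeleted {
		// resource is its zero value here; only the fact of the deletion is available
		return nil
	}
	// resource holds the created or updated state
	return nil
}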

type ResourceEventType

type ResourceEventType int

ResourceEventType defines the types of events that can affect a k8s resource

This is used to limit the invocation of configured ReconcilerFuncs

func (ResourceEventType) String

func (r ResourceEventType) String() string

type Subresource added in v1.1.1

type Subresource string

Subresource represents a section of a resource that can be modified independently of the resource as a whole

const (
	SubresourceStatus Subresource = "status"
	SubresourceScale  Subresource = "scale"
)

Directories

Path Synopsis
cmd
internal
logconv
logconv provides a logr.Logger implementation based on a simple context-aware, structured logFunc
