opendatahub-operator

command module
v2.16.0
Published: Aug 9, 2024 License: Apache-2.0 Imports: 41 Imported by: 0

README

This operator is the primary operator for Open Data Hub. It is responsible for enabling data science applications such as Jupyter notebooks, ModelMesh serving, Data Science Pipelines, etc. The operator uses the DataScienceCluster CRD to deploy and configure these applications.


Usage

Prerequisites

If the single-model serving configuration is used, or if the Kserve component is used, make sure to install the following operators before proceeding to create DSCI and DSC instances.

Additionally, installing the Authorino operator and the Service Mesh operator enhances the user experience by providing single sign-on.

Installation

  • The latest version of the operator can be installed from the community-operators catalog on OperatorHub.

    ODH operator in OperatorHub

    Please note that the latest releases are made in the Fast channel.

  • It can also be built and installed from source manually; see the Developer guide for further instructions.

    1. Subscribe to the operator by creating the following subscription:

      cat <<EOF | oc create -f -
      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: opendatahub-operator
        namespace: openshift-operators
      spec:
        channel: fast
        name: opendatahub-operator
        source: community-operators
        sourceNamespace: openshift-marketplace
      EOF
      
    2. Create the DSCInitialization CR manually. Alternatively, the operator can create a default DSCI CR for you: remove the DISABLE_DSC_CONFIG environment variable from the CSV (or change its value to "false") and restart the operator pod.

    3. Create a DataScienceCluster CR to enable components. A minimal sketch follows; full examples are in the Example DSCInitialization and Example DataScienceCluster sections below.
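
      A minimal sketch of steps 2 and 3, reusing the default names shown in the Example DSCInitialization and Example DataScienceCluster sections below; adjust the specs to your needs:

      cat <<EOF | oc apply -f -
      apiVersion: dscinitialization.opendatahub.io/v1
      kind: DSCInitialization
      metadata:
        name: default-dsci
      spec:
        applicationsNamespace: opendatahub
      EOF

      cat <<EOF | oc apply -f -
      apiVersion: datasciencecluster.opendatahub.io/v1
      kind: DataScienceCluster
      metadata:
        name: default-dsc
      spec:
        components:
          dashboard:
            managementState: Managed
      EOF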

Developer Guide

Pre-requisites

  • Go version go1.21
  • operator-sdk, which can be updated to v1.31.1

Download manifests

The get_all_manifests.sh script facilitates the process of fetching manifests from remote git repositories. It is configured to work with a predefined map of components and their corresponding manifest locations.

Structure of COMPONENT_MANIFESTS

Each component is associated with its manifest location in the COMPONENT_MANIFESTS map. The key is the component's name, and the value is its location, formatted as <repo-org>:<repo-name>:<branch-name>:<source-folder>:<target-folder>

Workflow

  1. The script clones the remote repository <repo-org>/<repo-name> from the specified <branch-name>.
  2. It then copies the content from the relative path <source-folder> to the local odh-manifests/<target-folder> folder, as sketched below.
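
For illustration only, the two steps roughly correspond to the following commands for a single (placeholder) entry; the actual script iterates over the whole COMPONENT_MANIFESTS map and handles cleanup:

# placeholders mirror the <repo-org>:<repo-name>:<branch-name>:<source-folder>:<target-folder> format
git clone --depth 1 --branch <branch-name> https://github.com/<repo-org>/<repo-name>.git /tmp/<repo-name>
mkdir -p odh-manifests/<target-folder>
cp -r /tmp/<repo-name>/<source-folder>/. odh-manifests/<target-folder>/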

Local Storage

The script utilizes a local, empty folder named odh-manifests to host all required manifests, sourced either directly from the component’s source repository or the default odh-manifests git repository.

Adding New Components

To include a new component in the list of manifest repositories, simply extend the COMPONENT_MANIFESTS map with a new entry, as shown below:

declare -A COMPONENT_MANIFESTS=(
  # existing components ...
  ["new-component"]="<repo-org>:<repo-name>:<branch-name>:<source-folder>:<target-folder>"
)

Customizing Manifests Source

You have the flexibility to change the source of the manifests. Invoke the get_all_manifests.sh script with specific flags, as illustrated below:

./get_all_manifests.sh --odh-dashboard="maistra:odh-dashboard:test-manifests:manifests:odh-dashboard"

If the flag name matches a component key defined in COMPONENT_MANIFESTS, the command overwrites that component's manifest location; otherwise, the command fails.

For local development:
make get-manifests

This first cleans up your local odh-manifests folder. If you have local manifest changes that you want to reuse later, back them up before running this command.
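
For example, a simple way to keep a copy first (the backup path is arbitrary):

cp -r odh-manifests odh-manifests.bak
make get-manifests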

For building the operator image:
make image-build

By default, this builds the image without any local changes (a clean build); this is what the production build system does.

To build an image that includes your local odh-manifests folder, set IMAGE_BUILD_FLAGS="--build-arg USE_LOCAL=true" for make, e.g.:

make image-build -e IMAGE_BUILD_FLAGS="--build-arg USE_LOCAL=true"

Build Image

  • A custom operator image can be built from your local repository:

    make image -e IMG=quay.io/<username>/opendatahub-operator:<custom-tag>
    

    or (for example, for the user vhire)

    make image -e IMAGE_OWNER=vhire
    

    The default image, quay.io/opendatahub/opendatahub-operator:dev-0.0.1, is used when no argument is supplied to make image.

  • Once the image is created, the operator can be deployed either directly or through OLM. For either deployment method, a kubeconfig should be exported:

    export KUBECONFIG=<path to kubeconfig>
    

Deployment

Deploying operator locally

  • Define operator namespace

    export OPERATOR_NAMESPACE=<namespace-to-install-operator>
    
  • Deploy the created image to your cluster using the following command:

    make deploy -e IMG=quay.io/<username>/opendatahub-operator:<custom-tag> -e OPERATOR_NAMESPACE=<namespace-to-install-operator>
    
  • To remove the resources created during installation, use:

    make undeploy
    

Deploying operator using OLM

  • To create a new bundle in the defined operator namespace, run the following commands:

    export OPERATOR_NAMESPACE=<namespace-to-install-operator>
    make bundle
    

    Note: Skip the above step if you want to run an existing operator bundle.

  • Build Bundle Image:

    make bundle-build bundle-push BUNDLE_IMG=quay.io/<username>/opendatahub-operator-bundle:<VERSION>
    
  • Run the Bundle on a cluster:

    operator-sdk run bundle quay.io/<username>/opendatahub-operator-bundle:<VERSION> --namespace $OPERATOR_NAMESPACE --decompression-image quay.io/project-codeflare/busybox:1.36
    

Test with customized manifests

There are two ways to test your changes:

  1. Each component in the DataScienceCluster CR has a devFlags.manifests field, which can be used to pull manifests from the remote git repositories of the respective components. Using this method overrides the default manifests and creates customized resources for the respective components (a sketch follows after this list).

  2. [Under implementation] Build the operator image with local manifests.
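
A minimal sketch of option 1, assuming the devFlags.manifests entries take uri, contextDir, and sourcePath fields (verify against the API docs) and using a placeholder dashboard manifests location:

cat <<EOF | oc apply -f -
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    dashboard:
      managementState: Managed
      devFlags:
        manifests:
          # field names assumed from the component API; the uri/contextDir values are placeholders
          - uri: https://github.com/opendatahub-io/odh-dashboard/tarball/main
            contextDir: manifests
            sourcePath: ""
EOF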

Update API docs

Whenever a new API is added or a new field is added to a CRD, make sure to run the command:

make api-docs 

This ensures that the docs for the APIs are updated accordingly.

Enabled logging

Controller level

The logger for all controllers can only be changed from the CSV, using the --log-mode parameter (for example --log-mode devel). Valid values: "" (default), prod, production, devel, development.

This mainly affects logging during operator pod startup, common resource generation, and monitoring deployment.

--log-mode value    Log level     Comments
devel               debug / 0     lowest level
""                  info / 1      default option
default             info / 1      default option
prod                error / 2     highest level
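
Since the flag lives in the CSV, one way to change it on an OLM-managed install is to edit the CSV directly; this is a sketch, and the CSV name and namespace are assumptions based on the subscription shown earlier:

# locate the installed CSV
oc -n openshift-operators get csv | grep opendatahub-operator
# edit it and add "--log-mode devel" to the manager container's args
oc -n openshift-operators edit csv <opendatahub-operator-csv-name>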

Component level

The logger for components can be changed via DSCI devFlags at runtime. By default, if .spec.devFlags.logmode is not set, the INFO level is used. The modification applies to all components, not only the "Managed" ones. Update the DSCI CR with .spec.devFlags.logmode; see the example:

apiVersion: dscinitialization.opendatahub.io/v1
kind: DSCInitialization
metadata:
  name: default-dsci
spec:
  devFlags:
    logmode: development
  ...

Available values for logmode are "devel", "development", "prod", and "production". The first two behave the same and set the DEBUG level; the latter two behave the same and use the ERROR level.

.spec.devFlags.logmode    Stacktrace level    Verbosity    Output     Comments
devel                     WARN                INFO         Console    lowest level, using epoch time
development               WARN                INFO         Console    same as devel
""                        ERROR               INFO         JSON       default option
prod                      ERROR               INFO         JSON       highest level, using human-readable timestamp
production                ERROR               INFO         JSON       same as prod
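
An existing DSCI can also be patched in place instead of re-applying the full CR; a minimal sketch using the default-dsci name from the example above:

oc patch dscinitialization default-dsci --type merge \
  -p '{"spec": {"devFlags": {"logmode": "development"}}}'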

Example DSCInitialization

Below is the default DSCI CR config

kind: DSCInitialization
apiVersion: dscinitialization.opendatahub.io/v1
metadata:
  name: default-dsci
spec:
  applicationsNamespace: opendatahub
  monitoring:
    managementState: Managed
    namespace: opendatahub
  serviceMesh:
    controlPlane:
      metricsCollection: Istio
      name: data-science-smcp
      namespace: istio-system
    managementState: Managed
  trustedCABundle:
    customCABundle: ''
    managementState: Managed

Apply this example with modifications for your usage.
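
For example, assuming the YAML above has been saved locally as dsci.yaml:

oc apply -f dsci.yaml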

Example DataScienceCluster

When the operator is installed successfully in the cluster, a user can create a DataScienceCluster CR to enable ODH components. At a given time, ODH supports only one instance of the CR, which can be updated to get a custom list of components.

  1. Enable all components
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: default-dsc
spec:
  components:
    codeflare:
      managementState: Managed
    dashboard:
      managementState: Managed
    datasciencepipelines:
      managementState: Managed
    kserve:
      managementState: Managed
      serving:
        ingressGateway:
          certificate:
            type: OpenshiftDefaultIngress
        managementState: Managed
        name: knative-serving
    kueue:
      managementState: Managed
    modelmeshserving:
      managementState: Managed
    modelregistry:
      managementState: Managed
    ray:
      managementState: Managed
    trainingoperator:
      managementState: Managed
    trustyai:
      managementState: Managed
    workbenches:
      managementState: Managed
  2. Enable only Dashboard and Workbenches
apiVersion: datasciencecluster.opendatahub.io/v1
kind: DataScienceCluster
metadata:
  name: example
spec:
  components:
    dashboard:
      managementState: Managed
    workbenches:
      managementState: Managed

Note: The default value for a component is false.
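
After applying a CR, the reconciled state can be inspected on the CR itself, for example (using the default-dsc name from the first example):

oc get datasciencecluster default-dsc -o yaml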

Run functional Tests

The functional tests are written using Ginkgo and Gomega. In order to run the tests, the user needs to set up envtest, which provides a mocked Kubernetes cluster. A detailed explanation of how to configure envtest is provided here.

To run the tests for an individual controller, change directory into the controller's folder and run

ginkgo -v

This provides detailed logs of the test spec.

Note: When running tests for each controller, make sure to add the BinaryAssetsDirectory attribute to the envtest.Environment in the suite_test.go file. The value should point to the path where the envtest binaries are installed.
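
One common way to install those binaries is the setup-envtest tool from controller-runtime; this is a sketch, and the Kubernetes version and flags shown are assumptions to adapt to your environment:

go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
# prints the directory containing the etcd and kube-apiserver test binaries for the requested version
setup-envtest use 1.28.0 -p path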

In order to run tests for all the controllers, we can use the make command

make unit-test

Note: The make command should be executed on the root project level.

Run e2e Tests

A user can run the e2e tests in the same namespace as the operator. To deploy the opendatahub-operator, refer to this section. The following environment variables must be set when running locally:

export KUBECONFIG=/path/to/kubeconfig

When testing the RHODS operator in dev mode, ensure that no ODH CSV exists. Once the above variables are set, run the following:

make e2e-test

Additional flags can be passed to the e2e tests by setting the E2E_TEST_FLAGS variable. The following table lists all the available flags:

Flag              Description                                                                                          Default value
--skip-deletion   Skips the dsc-deletion test, which deletes DataScienceCluster resources; set to true to skip it.     false

Example command to run the full test suite, skipping the test for DataScienceCluster deletion:

make e2e-test -e OPERATOR_NAMESPACE=<namespace> -e E2E_TEST_FLAGS="--skip-deletion=true"

API Overview

Please refer to api documentation

Component Integration

Please refer to components docs

Troubleshooting

Please refer to troubleshooting documentation

Upgrade testing

Please refer to upgrade testing documentation

Documentation


There is no documentation for this package.

Directories

Path Synopsis
apis
datasciencecluster/v1
Package v1 contains API Schema definitions for the datasciencecluster v1 API group
dscinitialization/v1
Package v1 contains API Schema definitions for the dscinitialization v1 API group
features/v1
Package v1 contains API Schema definitions for the feature v1 API group
infrastructure/v1
+groupName=datasciencecluster.opendatahub.io
codeflare
Package codeflare provides utility functions to config CodeFlare as part of the stack, which makes managing distributed compute infrastructure in the cloud easy and intuitive for data scientists
dashboard
Package dashboard provides utility functions to config Open Data Hub Dashboard: a web dashboard that displays installed Open Data Hub components with easy access to component UIs and documentation
datasciencepipelines
Package datasciencepipelines provides utility functions to config Data Science Pipelines: a pipeline solution for end-to-end MLOps workflows that supports the Kubeflow Pipelines SDK, Tekton, and Argo Workflows
kserve
Package kserve provides utility functions to config Kserve as the controller for serving ML models on arbitrary frameworks
kueue
+groupName=datasciencecluster.opendatahub.io
modelmeshserving
Package modelmeshserving provides utility functions to config ModelMesh, a general-purpose model serving management/routing layer
modelregistry
Package modelregistry provides utility functions to config ModelRegistry, an ML model metadata repository service
ray
Package ray provides utility functions to config Ray as part of the stack, which makes managing distributed compute infrastructure in the cloud easy and intuitive for data scientists
trainingoperator
Package trainingoperator provides utility functions to config trainingoperator as part of the stack, which makes managing distributed compute infrastructure in the cloud easy and intuitive for data scientists
trustyai
Package trustyai provides utility functions to config TrustyAI, a bias/fairness and explainability toolkit
workbenches
Package workbenches provides utility functions to config Workbenches to secure Jupyter Notebook in Kubernetes environments with support for OAuth
controllers
certconfigmapgenerator
Package certconfigmapgenerator contains generator logic to add the cert configmap resource in user namespaces
datasciencecluster
Package datasciencecluster contains controller logic of CRD DataScienceCluster
dscinitialization
Package dscinitialization contains controller logic of CRD DSCInitialization
secretgenerator
Package secretgenerator contains generator logic of secret resources used in the Open Data Hub operator
status
Package status provides a generic way to report status and conditions for any resource of type client.Object
pkg
cluster
Package cluster contains utility functions used to operate on cluster resources
common
Package common contains utility functions used by different components for cluster-related common operations; refer to package cluster
deploy
Package deploy provides utility functions used by each component to deploy manifests to the cluster
trustedcabundle
Package trustedcabundle provides utility functions to create and check the trusted CA bundle configmap from the DSCI CRD
upgrade
Package upgrade provides functions to upgrade ODH from v1 to v2 and between various v2 versions
tests
