# Intent Driven Orchestration Planner
![planner.png](https://github.com/intel/intent-driven-orchestration/raw/v0.2.0/planner.png)
Today’s container orchestration engines promote a model in which workloads request a specific quantity of resources
(e.g. a number of vCPUs), a quantity range (e.g. a min/max number of vCPUs), or nothing at all, to support the
appropriate placement of workloads. This applies in the Cloud and at the Edge using Kubernetes
or K3s (although the concept is not limited to Kubernetes-based systems). The end state of the
resource allocation is declared, but that declaration is effectively an imperative definition of what resources are
required. This model has proven effective, but it has a number of challenges:
- experience shows that suboptimal information is often declared, leading to over-allocated resources or sub-optimal
performance,
- these declarations carry context that users need to understand but often do not (e.g. resource requests do not
dynamically change based on the instance type that underpins a cluster).
This project proposes a new way to do orchestration: moving from an imperative model to an intent-driven one, in which
users express their intents in the form of objectives (e.g. required latency, throughput, or reliability targets) and
the orchestration stack itself determines which resources in the infrastructure are required to fulfill those
objectives. This new approach continues to benefit from community investments in scheduling (determining when & where
to place workloads) and augments it with a continuously running planning loop that determines what to configure in the
system, and how.
While this repository holds the planning component implementation, it is key to note that it works closely with
schedulers, the observability stack, and potentially analytics stacks. It is essential that those schedulers are fed
the right information to make their placement decisions.
The planning component is essential for enabling Intent Driven Orchestration (IDO), as it breaks down higher-level
objectives (e.g. a latency compliance target) into dynamic, actionable plans (e.g. policies for platform resource
allocation, dynamic vertical & horizontal scaling, etc.). This enables hierarchically controlled systems in which
Service Level Objectives (SLOs) are broken down into finer-grained goal settings for the platform. A key input to the
planning component, used to determine the right set of actions, is the set of models that describe workload behaviour
and the platform's effects on the associated Quality of Service (QoS).
The initial goal is to focus on managing the QoS of a set of instances of a workload. Subsequently, the goals are
expected to shift toward End-to-End (E2E) management of QoS parameters in multi-tenant environments with mixed
criticality. It is also a goal that the planning component be easily extensible, with administrators able to swap
functionality in and out through a plugin model. The architecture is intended to support proactive planning and
coordination between planners to fulfill overarching intents. It is expected that the imperative model and the Intent
Driven Orchestration model will coexist.
## Example
To see the benefit of this model please review the deployment and associated objective manifest files:
- The Deployment spec only defines which container image to use (hence it is fully abstracted from platform
resources; this is different from, but still analogous to, the Serverless methodology).
- The Objective spec defines a P95 latency compliance target of less than 4ms, as measured by e.g. a service mesh,
and requests an availability target of two nines (i.e. 99% availability).
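For illustration, an objective manifest along those lines could look roughly as follows. This is a sketch only: the
field names (`targetRef`, `objectives`, `measuredBy`) and the API group are assumptions for illustration, not the
exact CRD schema; the authoritative definitions live in `artefacts/intents_crds_v1alpha1.yaml`.

```yaml
# Illustrative only -- field names are assumptions, not the exact CRD schema;
# see artefacts/intents_crds_v1alpha1.yaml for the authoritative definitions.
apiVersion: ido.intel.com/v1alpha1
kind: Intent
metadata:
  name: my-deployment-intent
spec:
  targetRef:
    kind: Deployment
    name: default/my-deployment
  objectives:
    - name: p95-latency-compliance
      value: 4.0            # target: P95 latency below 4ms
      measuredBy: default/p95latency
    - name: availability
      value: 0.99           # two nines of availability
      measuredBy: default/availability
```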
## Running the planning component
Step 1) Add the CRDs:

```shell
$ kubectl apply -f artefacts/intents_crds_v1alpha1.yaml
```
Step 2) Deploy the planner (make sure to adapt the configs to your environment):

```shell
$ kubectl create ns ido
$ kubectl apply -n ido -f artefacts/deploy/manifest.yaml
```
Step 3) Deploy the actuators of interest using:

```shell
$ kubectl apply -n ido -f plugins/<name>/<name>.yaml
```
These steps should be followed by setting up your default profiles (if needed).
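A default profile maps a named KPI to a way of measuring it, which the planner can then reference from objectives. As
a hedged illustration only (the kind name `KPIProfile` and fields such as `kpi_type` and `query` are assumptions;
consult the repository's example manifests for the real schema), such a profile might look like:

```yaml
# Illustrative only -- field names are assumptions; consult the project's
# example manifests for the actual profile schema.
apiVersion: ido.intel.com/v1alpha1
kind: KPIProfile
metadata:
  name: p95latency
  namespace: default
spec:
  kpi_type: latency
  description: "P95 latency as reported by the service mesh"
  query: "histogram_quantile(0.95, sum(rate(response_latency_ms_bucket[30s])) by (le))"
```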
We recommend the use of a service mesh such as Linkerd or Istio to provide encryption and monitoring capabilities for
the subcomponents of the planning framework itself. After creating the namespace, enable auto-injection. For Linkerd
do:

```shell
$ kubectl annotate ns ido linkerd.io/inject=enabled
```

or for Istio use:

```shell
$ kubectl label namespace ido istio-injection=enabled --overwrite
```
For more information on running and configuring the planner see the getting started guide.
## Internals
There are three key packages enabling the Intent Driven Orchestration model:
- The framework, which enables the continuously running feedback loop that monitors the workload resources and
matches the current states of the objectives in the system to the desired ones.
- The planning component, which actively tunes resource requirements to bring the current state of the objectives
closer to the desired state.
- A set of actuators, which enable the planner to:
  - predict the effect an orchestration activity has,
  - perform that action if required, and
  - (optionally) re-calculate/re-train the underlying model that enables the aforementioned prediction.
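The three actuator capabilities above can be sketched as a Go interface. This is a hypothetical illustration, not the
project's actual plugin API: the `Actuator` interface, the `State` type, and the toy `scaleOut` actuator with its
naive "latency halves when replicas double" model are all assumptions made purely to show the predict/perform/re-train
split.

```go
package main

import "fmt"

// State is a simplified stand-in for the planner's notion of current and
// desired objective values (hypothetical type; the real project is richer).
type State struct {
	Replicas   int
	P95Latency float64 // milliseconds
}

// Actuator mirrors the three capabilities described above: predicting the
// effect of an orchestration action, performing it, and (optionally)
// re-training the model behind the prediction. Illustrative sketch only.
type Actuator interface {
	Name() string
	// NextState predicts candidate states reachable from current toward goal.
	NextState(current, goal State) []State
	// Perform applies the chosen action.
	Perform(target State)
	// Effect re-trains the underlying prediction model.
	Effect(observations []State)
}

// scaleOut is a toy horizontal-scaling actuator whose model naively assumes
// P95 latency halves every time the replica count doubles.
type scaleOut struct{}

func (s scaleOut) Name() string { return "toy-scale-out" }

func (s scaleOut) NextState(current, goal State) []State {
	var options []State
	state := current
	// Keep doubling replicas until the predicted latency meets the goal.
	for state.P95Latency > goal.P95Latency {
		state = State{Replicas: state.Replicas * 2, P95Latency: state.P95Latency / 2}
		options = append(options, state)
	}
	return options
}

func (s scaleOut) Perform(target State) {
	fmt.Printf("scaling to %d replicas\n", target.Replicas)
}

func (s scaleOut) Effect(observations []State) {} // no model to re-train here

func main() {
	var a Actuator = scaleOut{}
	current := State{Replicas: 1, P95Latency: 16.0}
	goal := State{P95Latency: 4.0}
	// Prints two candidate states: 2 replicas/8.0ms and 4 replicas/4.0ms.
	for _, s := range a.NextState(current, goal) {
		fmt.Printf("candidate: %d replicas, p95 %.1fms\n", s.Replicas, s.P95Latency)
	}
}
```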
Documentation and implementation notes for these components can be found here. Furthermore, notes on pluggability can
be found here, and general design notes can be found here.
## Communication and contribution
Report a bug by filing a new issue.
Contribute by opening a pull request. Please also see
CONTRIBUTING for more information.
Learn about pull requests.
Reporting a Potential Security Vulnerability: If you have discovered a potential security vulnerability in
Intent-Driven Orchestration, please send an e-mail to secure@intel.com. For issues related to Intel products, please
visit the Intel Security Center.
It is important to include the following details:
- The projects and versions affected
- Detailed description of the vulnerability
- Information on known exploits
Vulnerability information is extremely sensitive. Please encrypt all security vulnerability reports using our
PGP key.
A member of the Intel Product Security Team will review your e-mail and contact you to collaborate on resolving the
issue. For more information on how Intel works to resolve security issues, see:
vulnerability handling guidelines.