Cluster IQ
Cluster IQ is a tool for taking stock of the OpenShift clusters and their
resources running on the most common cloud providers, collecting relevant
information about compute resources, access routes and billing.
Metrics and monitoring are out of scope for this project; the main
purpose is to maintain an updated inventory of the clusters and offer an easier
way to identify, manage, and estimate their costs.
Supported cloud providers
The scope of the project is to take stock of the most common public cloud
providers, but as the component dedicated to scraping data is decoupled, more
providers could be included in the future.
The following table shows the compatibility matrix and which features are
available for every cloud provider:
| Cloud Provider | Compute Resources | Billing | Managing |
|----------------|-------------------|---------|----------|
| AWS            | Yes               | Yes     | No       |
| Azure          | No                | No      | No       |
| GCP            | No                | No      | No       |
Architecture
The following diagram shows the architecture of this project:
Installation
This section explains how to deploy ClusterIQ and ClusterIQ Console.
Prerequisites:
Accounts Configuration
- Create a folder called secrets for saving the cloud credentials. This folder is ignored by this repo to keep your credentials safe.
mkdir secrets
export CLUSTER_IQ_CREDENTIALS_FILE="./secrets/credentials"
⚠ Please take care and don't commit them to the repo.
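You can confirm that git really ignores the folder (git check-ignore prints the matching .gitignore rule):
git check-ignore -v secrets/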
- Create your credentials file with the AWS credentials of the accounts you want to scrape. The file must use the following format:
echo "
[ACCOUNT_NAME]
provider = {aws/gcp/azure}
user = XXXXXXX
key = YYYYYYY
billing_enabled = {true/false}
" >> $CLUSTER_IQ_CREDENTIALS_FILE
⚠ The accepted values for provider are aws, gcp and azure, but scraping is only supported for aws at the moment. The credentials file should be placed under the secrets/ path to work with docker/podman-compose.
❗ This file structure was designed to be generic, but it works differently depending on the cloud provider. For AWS, user refers to the ACCESS_KEY, and key refers to the SECRET_ACCESS_KEY.
❗ Some cloud providers charge extra costs when querying their billing APIs (like AWS Cost Explorer). Be careful when enabling this module, and check your account before doing so.
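For instance, an AWS account entry could look like the following; the account name and key values below are placeholders, not real credentials:
echo "
[my-aws-account]
provider = aws
user = AKIAXXXXXXXXXXXXXXXX
key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
billing_enabled = false
" >> $CLUSTER_IQ_CREDENTIALS_FILE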
OpenShift Deployment
- Prepare your cluster and CLI:
oc login ...
export NAMESPACE="cluster-iq"
oc new-project $NAMESPACE
- A secret containing this credentials information is needed. To create it, use the following command:
oc create secret generic credentials -n $NAMESPACE \
--from-file=credentials=$CLUSTER_IQ_CREDENTIALS_FILE
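You can optionally confirm that the secret exists and exposes the credentials key:
oc describe secret credentials -n $NAMESPACE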
- Configure your cluster-iq deployment using the ./deployments/openshift/00_config.yaml file. For more information about the supported parameters, check the Configuration section.
oc apply -n $NAMESPACE -f ./deployments/openshift/00_config.yaml
- Create the Service Account for Cluster-IQ, and bind it with the anyuid SCC.
oc apply -n $NAMESPACE -f ./deployments/openshift/01_service_account.yaml
oc adm policy add-scc-to-user anyuid -z cluster-iq
- Deploy and configure the Database:
oc create configmap -n $NAMESPACE pgsql-init --from-file=init.sql=./db/sql/init.sql
oc apply -n $NAMESPACE -f ./deployments/openshift/02_database.yaml
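Before deploying the API, wait until the database pod reports Running (the exact pod name depends on the manifest):
oc get pods -n $NAMESPACE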
- Deploy the API:
oc apply -n $NAMESPACE -f ./deployments/openshift/03_api.yaml
- Reconfigure the ConfigMap with the API route's hostname.
ROUTE_HOSTNAME=$(oc get route api -o jsonpath='{.spec.host}')
oc get cm config -o yaml | sed 's/REACT_APP_CIQ_API_URL: .*/REACT_APP_CIQ_API_URL: https:\/\/'$ROUTE_HOSTNAME'\/api\/v1/' | oc apply -f -
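Equivalently, a merge patch avoids editing the YAML by hand (a sketch, assuming the ConfigMap is named config as above):
oc patch cm config -n $NAMESPACE --type merge -p '{"data":{"REACT_APP_CIQ_API_URL":"https://'$ROUTE_HOSTNAME'/api/v1"}}'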
- Deploy the Scanner:
oc apply -n $NAMESPACE -f ./deployments/openshift/04_scanner.yaml
- Deploy the Console:
oc apply -n $NAMESPACE -f ./deployments/openshift/05_console.yaml
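Once every component is applied, a quick sanity check can be run; the exact resource names depend on the manifests under ./deployments/openshift/:
oc get pods -n $NAMESPACE
oc get routes -n $NAMESPACE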
Local Deployment (for development)
To deploy ClusterIQ locally for development purposes, check the following
document.
Configuration
Available configuration via Env Vars:
| Key | Value | Description |
|-----|-------|-------------|
| CIQ_API_HOST | string (Default: "127.0.0.1") | Inventory API listen host |
| CIQ_API_PORT | string (Default: "6379") | Inventory API listen port |
| CIQ_API_PUBLIC_HOST | string (Default: "") | Inventory API public endpoint |
| CIQ_DB_HOST | string (Default: "127.0.0.1") | Inventory database listen host |
| CIQ_DB_PORT | string (Default: "6379") | Inventory database listen port |
| CIQ_DB_PASS | string (Default: "") | Inventory database password |
| CIQ_CREDS_FILE | string (Default: "") | Cloud providers accounts credentials file |
These variables are defined in ./<PROJECT_FOLDER>/.env for use by the Makefile,
and in ./<PROJECT_FOLDER>/deploy/openshift/config.yaml for deployment on
OpenShift.
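For example, a local run could override a few of them like this (the values below are illustrative, not recommended defaults):
export CIQ_API_HOST="0.0.0.0"
export CIQ_API_PORT="8080"
export CIQ_DB_HOST="127.0.0.1"
export CIQ_DB_PASS="changeme"
export CIQ_CREDS_FILE="./secrets/credentials"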
Scanners
As each cloud provider has a different API, a specific scanner adapted to each
provider is required.
To build every available scanner, use the following makefile rule:
make build-scanners
By default, every build rule is performed using the Dockerfile of each
specific scanner.
AWS Scanner
The scanner should run periodically to keep the inventory up to date.
# Building
make build-aws-scanner
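There is no single prescribed way to run the built scanner locally; the sketch below shows one option using podman. The image tag cluster-iq/aws-scanner is an assumption (check the Makefile for the actual tag), as is the assumption that the scanner reads the CIQ_* variables from the Configuration section:
# Hypothetical one-off run; verify the image name and variables against the repo
podman run --rm \
  -v "$PWD/secrets/credentials:/credentials:ro" \
  -e CIQ_CREDS_FILE=/credentials \
  -e CIQ_DB_HOST=127.0.0.1 \
  -e CIQ_DB_PORT=6379 \
  cluster-iq/aws-scanner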
API Server
The API server acts as the intermediary between the UI and the DB.
# Building
make build-api
# Run
make start-api
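With the API running, a quick smoke test can be issued against it. The host, port and /api/v1 prefix follow the defaults and the console configuration above; the clusters resource name is only an assumption about the API:
# Illustrative request; adjust the resource path to the actual API endpoints
curl -s http://127.0.0.1:6379/api/v1/clusters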