Pravega Operator
Project status: alpha
The project is currently alpha. While no breaking API changes are currently planned, we reserve the right to address bugs and change the API before the project is declared stable.
Overview
Pravega is an open source distributed storage service implementing Streams. It offers Stream as the main primitive for the foundation of reliable storage systems: a high-performance, durable, elastic, and unlimited append-only byte stream with strict ordering and consistency.
The Pravega Operator manages Pravega clusters deployed to Kubernetes and automates tasks related to operating a Pravega cluster.
- Create and destroy a Pravega cluster
- Resize cluster
- Rolling upgrades
Requirements
- Kubernetes 1.15+
- Helm 3+
- An existing Apache ZooKeeper 3.6.1 cluster. This can be easily deployed using our ZooKeeper Operator
- An existing Apache BookKeeper 4.9.2 cluster. This can be easily deployed using our BookKeeper Operator
Quickstart
Install the Operator
Note: If you are running on Google Kubernetes Engine (GKE), please check the GKE-specific notes first.
We recommend using Helm to deploy the Pravega Operator. Check out the Helm installation document for instructions.
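As a minimal sketch, assuming the operator chart lives at charts/pravega-operator in this repository and using foo as the release name (matching the uninstall step later in this document):
$ helm install foo charts/pravega-operator   # chart path and release name are assumptions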
Deploying in Test Mode
The operator can be run in "test" mode if you want to create Pravega on minikube or on a cluster with very limited resources, by setting testmode: true in the values.yaml file. An operator running in test mode skips the minimum replica requirement checks on Pravega components. "Test" mode ensures a bare-minimum setup of Pravega and is not recommended for use in production environments.
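For reference, this is the relevant values.yaml entry; any surrounding keys are omitted here and may vary by chart version:
testmode: true
The same setting can be passed on the command line, assuming the chart path and release name used above:
$ helm install foo charts/pravega-operator --set testmode=true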
Upgrade the Operator
For upgrading the Pravega Operator, check the operator upgrade document.
Install a sample Pravega cluster
Set up Tier 2 Storage
Pravega requires a long-term storage provider known as longtermStorage.
Check out the available options for long-term storage and how to configure it.
For demo purposes, you can quickly install a toy NFS server.
$ helm install stable/nfs-server-provisioner --generate-name
And create a PVC for longtermStorage that utilizes it.
$ kubectl create -f ./example/pvc-tier2.yaml
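For reference, a minimal sketch of what such a PVC manifest might contain (the pravega-tier2 name matches the example referenced below; the nfs storage class is the one created by the NFS provisioner above, and the requested size is illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pravega-tier2          # matches [TIER2_NAME] below
spec:
  storageClassName: "nfs"      # storage class created by nfs-server-provisioner
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi            # illustrative size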
Install a Pravega cluster
Use Helm to install a sample Pravega cluster with release name bar.
$ helm install bar charts/pravega --set zookeeperUri=[ZOOKEEPER_HOST] --set bookkeeperUri=[BOOKKEEPER_SVC] --set storage.longtermStorage.filesystem.pvc=[TIER2_NAME]
where:
- [ZOOKEEPER_HOST] is the host or IP address of your ZooKeeper deployment (e.g. zookeeper-client:2181). Multiple ZooKeeper URIs can be specified; use a comma-separated list and DO NOT leave any spaces in between (e.g. zookeeper-0:2181,zookeeper-1:2181,zookeeper-2:2181).
- [BOOKKEEPER_SVC] is the name of the headless service of your BookKeeper deployment (e.g. bookkeeper-bookie-0.bookkeeper-bookie-headless.default.svc.cluster.local:3181,bookkeeper-bookie-1.bookkeeper-bookie-headless.default.svc.cluster.local:3181,bookkeeper-bookie-2.bookkeeper-bookie-headless.default.svc.cluster.local:3181).
- [TIER2_NAME] is the longtermStorage PersistentVolumeClaim name: pravega-tier2 if you created the PVC above.
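Putting it together, a filled-in sketch of the command using the example values above (a single bookie address is used for brevity; substitute the values from your own deployment):
$ helm install bar charts/pravega --set zookeeperUri=zookeeper-client:2181 --set bookkeeperUri=bookkeeper-bookie-0.bookkeeper-bookie-headless.default.svc.cluster.local:3181 --set storage.longtermStorage.filesystem.pvc=pravega-tier2
Note that Helm treats unescaped commas in --set values as value separators, so if you pass a comma-separated URI list this way, escape each comma (\,) or put the values in a values file instead.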
Check out the Pravega Helm Chart for the complete list of configurable parameters.
Verify that the cluster instance and its components are being created.
$ kubectl get PravegaCluster
NAME VERSION DESIRED MEMBERS READY MEMBERS AGE
bar-pravega 0.4.0 7 0 25s
After a couple of minutes, all cluster members should become ready.
$ kubectl get PravegaCluster
NAME VERSION DESIRED MEMBERS READY MEMBERS AGE
bar-pravega 0.4.0 7 7 2m
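To follow the rollout without re-running the command, the standard kubectl watch flag works here as well:
$ kubectl get PravegaCluster bar-pravega -w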
$ kubectl get all -l pravega_cluster=bar-pravega
NAME READY STATUS RESTARTS AGE
pod/bar-pravega-controller-64ff87fc49-kqp9k 1/1 Running 0 2m
pod/bar-pravega-segmentstore-0 1/1 Running 0 2m
pod/bar-pravega-segmentstore-1 1/1 Running 0 1m
pod/bar-pravega-segmentstore-2 1/1 Running 0 30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/bar-pravega-controller ClusterIP 10.23.244.3 <none> 10080/TCP,9090/TCP 2m
service/bar-pravega-segmentstore-headless ClusterIP None <none> 12345/TCP 2m
NAME DESIRED CURRENT READY AGE
replicaset.apps/bar-pravega-controller-64ff87fc49 1 1 1 2m
NAME DESIRED CURRENT AGE
statefulset.apps/bar-pravega-segmentstore 3 3 2m
By default, a PravegaCluster instance is only accessible within the cluster through the Controller ClusterIP service. From within the Kubernetes cluster, a client can connect to Pravega at:
tcp://<pravega-name>-pravega-controller.<namespace>:9090
And the REST management interface is available at:
http://<pravega-name>-pravega-controller.<namespace>:10080/
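As a quick smoke test from within the Kubernetes cluster, you can query the REST interface. The /v1/scopes endpoint used here is assumed from the Pravega Controller REST API and lists the available scopes:
$ curl http://bar-pravega-controller.default:10080/v1/scopes   # endpoint path is an assumption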
Check out the external access documentation if your clients need to connect to Pravega from outside Kubernetes.
Scale a Pravega cluster
You can scale Pravega components independently by modifying their corresponding field in the Pravega resource spec. You can either kubectl edit the cluster or kubectl patch it. If you edit it, update the number of replicas for BookKeeper, Controller, and/or Segment Store and save the updated spec.
Example of patching the Pravega resource to scale the Segment Store instances to 4.
kubectl patch PravegaCluster <pravega-name> --type='json' -p='[{"op": "replace", "path": "/spec/pravega/segmentStoreReplicas", "value": 4}]'
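The same pattern works for other components. For example, the following sketch scales the Controller to 2 instances, assuming the analogous controllerReplicas field in the spec:
kubectl patch PravegaCluster <pravega-name> --type='json' -p='[{"op": "replace", "path": "/spec/pravega/controllerReplicas", "value": 2}]'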
Upgrade a Pravega cluster
Check out the upgrade guide.
Uninstall the Pravega cluster
$ helm uninstall bar
$ kubectl delete -f ./example/pvc-tier2.yaml
Uninstall the Operator
Note that the Pravega clusters managed by the Pravega operator will NOT be deleted even if the operator is uninstalled.
$ helm uninstall foo
If you want to delete the Pravega clusters, make sure to do it before uninstalling the operator.
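To check whether the operator still manages any clusters before uninstalling it, list the remaining PravegaCluster resources:
$ kubectl get PravegaCluster --all-namespaces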
Manual installation
You can also manually install/uninstall the operator and Pravega with kubectl commands. Check out the manual installation document for instructions.
Configuration
Check out the configuration document.
Development
Check out the development guide.
Releases
The latest Pravega releases can be found on the GitHub Releases page.
Troubleshooting
Check out the troubleshooting document.