Pravega Operator
Project status: alpha
The project is currently alpha. While no breaking API changes are currently planned, we reserve the right to address bugs and change the API before the project is declared stable.
Overview
Pravega is an open source distributed storage service implementing Streams. It offers Stream as the main primitive for the foundation of reliable storage systems: a high-performance, durable, elastic, and unlimited append-only byte stream with strict ordering and consistency.
The Pravega Operator manages Pravega clusters deployed to Kubernetes and automates tasks related to operating a Pravega cluster.
- Create and destroy a Pravega cluster
- Resize cluster
- Rolling upgrades (experimental)
Requirements
- Kubernetes 1.9+
- Helm 2.10+
- An existing Apache Zookeeper 3.5 cluster. This can be easily deployed using our Zookeeper operator
Quickstart
Install the Operator
Note: If you are running on Google Kubernetes Engine (GKE), please check this first.
Use Helm to quickly deploy a Pravega operator with the release name foo.
$ helm install charts/pravega-operator --name foo
Verify that the Pravega Operator is running.
$ kubectl get deploy
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
foo-pravega-operator 1 1 1 1 17s
Install a sample Pravega cluster
Set up Tier 2 Storage
Pravega requires a long-term storage provider known as Tier 2 storage.
Check out the available options for Tier 2 and how to configure it.
For demo purposes, you can quickly install a toy NFS server.
$ helm install stable/nfs-server-provisioner
And create a PVC for Tier 2 that utilizes it.
$ kubectl create -f ./example/pvc-tier2.yaml
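For reference, a Tier 2 PVC manifest along these lines would work with the NFS provisioner above. This is a sketch, not necessarily the exact contents of example/pvc-tier2.yaml; the nfs storage class name and the 50Gi size are assumptions:

```yaml
# Sketch of a Tier 2 PersistentVolumeClaim backed by the NFS provisioner.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pravega-tier2
spec:
  storageClassName: "nfs"   # storage class served by nfs-server-provisioner (assumption)
  accessModes:
    - ReadWriteMany         # Tier 2 must be shared across Segment Store pods
  resources:
    requests:
      storage: 50Gi         # illustrative size
```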
Install a Pravega cluster
Use Helm to install a sample Pravega cluster with the release name bar.
$ helm install charts/pravega --name bar --set zookeeperUri=[ZOOKEEPER_HOST] --set pravega.tier2=[TIER2_NAME]
where:
- [ZOOKEEPER_HOST] is the host or IP address of your Zookeeper deployment (e.g. zk-client:2181). Multiple Zookeeper URIs can be specified as a comma-separated list with NO spaces in between (e.g. zk-0:2181,zk-1:2181,zk-2:2181).
- [TIER2_NAME] is the name of the Tier 2 PersistentVolumeClaim: pravega-tier2 if you created the PVC above.
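Under the hood, the chart renders a PravegaCluster custom resource from these values. A sketch of what that resource might look like is below; the field names follow the operator's alpha CRD and may differ between versions, and the replica counts are illustrative:

```yaml
# Sketch of the PravegaCluster resource rendered by the Helm chart (alpha CRD; assumption).
apiVersion: "pravega.pravega.io/v1alpha1"
kind: "PravegaCluster"
metadata:
  name: bar-pravega
spec:
  zookeeperUri: zk-client:2181      # [ZOOKEEPER_HOST]
  bookkeeper:
    replicas: 3
  pravega:
    controllerReplicas: 1
    segmentStoreReplicas: 3
    tier2:
      filesystem:
        persistentVolumeClaim:
          claimName: pravega-tier2  # [TIER2_NAME]
```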
Check out the Pravega Helm Chart for a more complete list of installation parameters.
Verify that the cluster instance and its components are being created.
$ kubectl get PravegaCluster
NAME VERSION DESIRED MEMBERS READY MEMBERS AGE
bar-pravega 0.4.0 7 0 25s
After a couple of minutes, all cluster members should become ready.
$ kubectl get PravegaCluster
NAME VERSION DESIRED MEMBERS READY MEMBERS AGE
bar-pravega 0.4.0 7 7 2m
$ kubectl get all -l pravega_cluster=bar-pravega
NAME READY STATUS RESTARTS AGE
pod/bar-bookie-0 1/1 Running 0 2m
pod/bar-bookie-1 1/1 Running 0 2m
pod/bar-bookie-2 1/1 Running 0 2m
pod/bar-pravega-controller-64ff87fc49-kqp9k 1/1 Running 0 2m
pod/bar-pravega-segmentstore-0 1/1 Running 0 2m
pod/bar-pravega-segmentstore-1 1/1 Running 0 1m
pod/bar-pravega-segmentstore-2 1/1 Running 0 30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/bar-bookie-headless ClusterIP None <none> 3181/TCP 2m
service/bar-pravega-controller ClusterIP 10.23.244.3 <none> 10080/TCP,9090/TCP 2m
service/bar-pravega-segmentstore-headless ClusterIP None <none> 12345/TCP 2m
NAME DESIRED CURRENT READY AGE
replicaset.apps/bar-pravega-controller-64ff87fc49 1 1 1 2m
NAME DESIRED CURRENT AGE
statefulset.apps/bar-bookie 3 3 2m
statefulset.apps/bar-pravega-segmentstore 3 3 2m
By default, a PravegaCluster instance is only accessible within the Kubernetes cluster through the Controller ClusterIP service. From within the cluster, a client can connect to Pravega at:
tcp://<pravega-name>-pravega-controller.<namespace>:9090
And the REST management interface is available at:
http://<pravega-name>-pravega-controller.<namespace>:10080/
Check out the external access documentation if your clients need to connect to Pravega from outside Kubernetes.
Scale a Pravega cluster
You can scale Pravega components independently by modifying their corresponding field in the Pravega resource spec. You can either kubectl edit the cluster or kubectl patch it. If you edit it, update the number of replicas for BookKeeper, Controller, and/or Segment Store and save the updated spec.
Example of patching the Pravega resource to scale the Segment Store instances to 4:
$ kubectl patch PravegaCluster <pravega-name> --type='json' -p='[{"op": "replace", "path": "/spec/pravega/segmentStoreReplicas", "value": 4}]'
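If you prefer kubectl edit, the same scale-up amounts to changing one field in the spec. The fragment below shows only the relevant portion of the resource (surrounding fields elided), assuming the alpha CRD field names:

```yaml
# Fragment of the PravegaCluster spec after scaling the Segment Store (sketch).
spec:
  pravega:
    segmentStoreReplicas: 4   # was 3; the operator reconciles the StatefulSet to match
```

The operator watches the resource and adjusts the underlying StatefulSet, so no manual pod management is needed.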
Upgrade a Pravega cluster
Check out the upgrade guide.
Uninstall the Pravega cluster
$ helm delete bar --purge
$ kubectl delete -f ./example/pvc-tier2.yaml
Uninstall the Operator
Note that the Pravega clusters managed by the Pravega operator will NOT be deleted even if the operator is uninstalled.
If you want to delete Pravega clusters, make sure to do it before uninstalling the operator.
$ helm delete foo --purge
Manual installation
You can also manually install/uninstall the operator and Pravega with kubectl commands. Check out the manual installation document for instructions.
Configuration
Check out the configuration document.
Development
Check out the development guide.
Releases
The latest Pravega releases can be found on the GitHub Releases page.
Troubleshooting
Check out the troubleshooting document.