OpenEBS CStor CSI Driver
CSI driver implementation for OpenEBS CStor storage engine.
Project Status
This project is under active development and considered to be in Alpha state.
The current implementation supports the following for CStor Volumes:
- Provisioning and De-provisioning with ext4 filesystems
- Snapshots and clones
- Volume Expansion
Usage
Prerequisites
Before setting up OpenEBS CStor CSI driver make sure your Kubernetes Cluster
meets the following prerequisites:
- You will need to have Kubernetes version 1.14 or higher
- You will need to have OpenEBS Version 1.2 or higher installed.
The steps to install OpenEBS are here
- CStor CSI driver operates on the cStor Pools provisioned using the new schema called CSPC.
Steps to provision the pools using CSPC are here
- iSCSI initiator utils installed on all the worker nodes
- You have access to install RBAC components into kube-system namespace.
The OpenEBS CStor CSI driver components are installed in kube-system
namespace to allow them to be flagged as system critical components.
- You will need to turn on ExpandCSIVolumes and ExpandInUsePersistentVolumes feature gates on kubelets and kube-apiserver.
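As an illustration of the last two prerequisites (the package names and the way the flags are passed are assumptions that depend on your distribution and on how the cluster was deployed):
sudo apt-get install -y open-iscsi          # Ubuntu; use iscsi-initiator-utils on CentOS
sudo systemctl enable --now iscsid
The expansion feature gates would then be passed to both the kubelet and the kube-apiserver, for example as:
--feature-gates=ExpandCSIVolumes=true,ExpandInUsePersistentVolumes=true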
Setup OpenEBS CStor CSI Driver
The OpenEBS CStor CSI driver consists of two components:
- A controller component launched as a StatefulSet,
implementing the CSI controller services. The Control Plane
services are responsible for creating/deleting the required
OpenEBS Volume.
- A node component that runs as a DaemonSet,
implementing the CSI node services. The node component is
responsible for performing the iSCSI connection management and
connecting to the OpenEBS Volume.
OpenEBS CStor CSI driver components can be installed by running the
following command.
The node components make use of the host's iSCSI binaries for iSCSI
connection management, so the spec has to load the required iSCSI
files into the node pods. Select the deployment file appropriate
for your OS.
- For Ubuntu 16.04 and CentOS:
kubectl apply -f https://raw.githubusercontent.com/openebs/cstor-csi/master/deploy/csi-operator.yaml
- For Ubuntu 18.04:
kubectl apply -f https://raw.githubusercontent.com/openebs/cstor-csi/master/deploy/csi-operator-ubuntu-18.04.yaml
Verify that the OpenEBS CSI Components are installed.
$ kubectl get pods -n kube-system -l role=openebs-csi
NAME READY STATUS RESTARTS AGE
openebs-csi-controller-0 4/4 Running 0 6m14s
openebs-csi-node-56t5g 2/2 Running 0 6m13s
Provision a cStor volume using OpenEBS CStor CSI driver
- Make sure you already have a cStor Pool created, or create one using the
below command. In the below cspc.yaml, make sure that the number of pools
specified is greater than or equal to the number of replicas required for
the volume. Update kubernetes.io/hostname and blockDeviceName in the yaml
before applying it.
The following command will create the cStor Pools specified in the cspc yaml:
kubectl apply -f https://raw.githubusercontent.com/openebs/cstor-csi/master/examples/cspc.yaml
- Create a Storage Class to dynamically provision volumes
using the OpenEBS CSI provisioner. A sample storage class looks like:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: openebs-csi-cstor-sparse
provisioner: cstor.csi.openebs.io
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cstor-sparse-cspc
  replicaCount: "1"
You will need to specify the correct cStor CSPC from your cluster
and the desired replicaCount for the volume. The replicaCount
should be less than or equal to the number of pools available in the CSPC.
The following file helps you create a Storage Class
using the cStor sparse pool created in the previous step.
kubectl apply -f https://raw.githubusercontent.com/openebs/cstor-csi/master/examples/csi-storageclass.yaml
- Run your application by specifying the above Storage Class for
its PVCs; a sample claim is sketched after these steps.
The following example launches a busybox pod using a cStor Volume
provisioned via the CSI Provisioner.
kubectl apply -f https://raw.githubusercontent.com/openebs/cstor-csi/master/examples/busybox-csi-cstor-sparse.yaml
Verify that the pod is running and is able to write data.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 97s
The busybox pod is instructed to write the date at startup into the
mounted path at /mnt/openebs-csi/date.txt
$ kubectl exec -it busybox -- cat /mnt/openebs-csi/date.txt
Wed Jul 31 04:56:26 UTC 2019
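The busybox example above bundles its own PersistentVolumeClaim. To wire up your own application, a claim referencing the Storage Class is all that is needed; the following is a minimal sketch, where the claim name and size are illustrative and the linked example yaml is the authoritative version:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-csivol-claim       # illustrative name; reference it from your pod spec
spec:
  storageClassName: openebs-csi-cstor-sparse
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi              # illustrative size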
How does it work?
The following steps indicate the PV provisioning workflow as it passes
through various components.
- Create a PVC with a Storage Class referring to the OpenEBS CStor CSI Driver.
- Kubernetes will pass the PV creation request to the OpenEBS
CSI Controller service via CreateVolume(), as this controller is
registered with Kubernetes to receive any requests related to
cstor.csi.openebs.io.
- The OpenEBS CSI Controller will create a custom resource called
CStorVolumeClaim (CVC) and return the details of the newly
created object back to Kubernetes. The CVCs are
monitored by the cstor-operator (embedded in m-apiserver). The
cstor-operator will wait to proceed with provisioning a CStorVolume
for a given CVC until Kubernetes has scheduled the application
using the PVC/CVC to a node in the cluster.
This is, in effect, working like WaitForFirstConsumer.
- When a node is assigned to the application, Kubernetes will
invoke the NodePublishVolume() request with the node and the
volume details, which include the identifier of the CVC.
This call will then record the node details in the CVC.
After updating the node id, the OpenEBS CStor CSI Driver Node
Service will wait for the CVC to be bound to an actual cStor Volume.
- The cstor-operator checks that node details are available on the CVC
and proceeds with the cStor Volume creation. Once the cStor Volume
is created, the CVC is updated with a reference to the cStor Volume
and its status is changed to bound.
- The node component, which was waiting on the CVC status, will proceed
to connect to the cStor volume.
Note: While the asynchronous handling of the Volume provisioning is
in progress, the application pod may throw some errors like:
- Waiting for CVC to be bound: implies volume components are still being created.
- Volume is not ready: Replicas yet to connect to controller: implies volume
components are already created but are yet to interact with each other.
On successful completion of the above steps, the application pod can
be seen in the Running state.
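While these steps are in progress you can watch them from the outside. The following is a sketch: the pod and claim names come from the earlier example, and the CVC resource name and the openebs namespace are assumptions that may vary across releases.
$ kubectl describe pod busybox               # the messages noted above typically surface as mount events on the pod
$ kubectl get cstorvolumeclaims -n openebs   # the intermediate CVC objects created by the controller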
Expand a cStor volume using OpenEBS CStor CSI driver
Notes:
- Only dynamically provisioned volumes can be resized.
- You can only resize volumes containing a file system if the file system is ext4.
- Make sure that the storage class has the allowVolumeExpansion field set to true
when the volume is provisioned.
Steps:
- Update the increased PVC size in the PVC spec section (pvc.spec.resources.requests.storage), for example with kubectl patch as sketched after these steps.
- Wait for the updated capacity to reflect in PVC status (pvc.status.capacity.storage).
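A minimal sketch of such an update, assuming the PVC from the earlier example and an illustrative new size of 10Gi:
kubectl patch pvc demo-csivol-claim -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
# Then wait for the new size to show up in the PVC status
kubectl get pvc demo-csivol-claim -o jsonpath='{.status.capacity.storage}'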
It is internally a two step process for volumes containing a file system:
- Volume expansion
- FileSystem expansion
Snapshot And Clone CStor Volume using OpenEBS CStor CSI Driver
Notes:
The VolumeSnapshotDataSource feature gate needs to be enabled on the kubelet and kube-apiserver.
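How the gate is turned on depends on how your cluster is deployed; as one illustration, the flag passed to both components would look like:
--feature-gates=VolumeSnapshotDataSource=true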
Steps:
- Create snapshot class pointing to cstor csi driver:
kubectl apply -f https://raw.githubusercontent.com/openebs/cstor-csi/master/deploy/snapshot-class.yaml
- Create a snapshot after updating the PVC and snapshot name in the following yaml:
kubectl apply -f https://raw.githubusercontent.com/openebs/cstor-csi/master/examples/csi-snapshot.yaml
- Verify that the snapshot has been created successfully:
kubectl get volumesnapshots.snapshot
NAME AGE
demo-snapshot 3d1h
- Create the volume clone using the above snapshot by updating the snapshot and PVC details in the following yaml (a sketch of such a claim appears after this list):
kubectl apply -f https://raw.githubusercontent.com/openebs/cstor-csi/master/examples/csi-pvc-clone.yaml
- Verify that the PVC has been successfully created:
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
demo-csivol-claim Bound pvc-52d88903-0518-11ea-b887-42010a80006c 5Gi RWO openebs-csi-cstor-sparse 3d1h
pvc-clone Bound pvc-2f2d65fc-0784-11ea-b887-42010a80006c 5Gi RWO openebs-csi-cstor-sparse 3s
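For reference, the clone claim produced by that yaml is an ordinary PVC whose dataSource points at the snapshot. A minimal sketch, assuming the names used in the examples above; the linked csi-pvc-clone.yaml is the authoritative version:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-clone
spec:
  storageClassName: openebs-csi-cstor-sparse
  dataSource:
    name: demo-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi          # must be at least the size of the source volume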