# NexentaStor CSI Driver over iSCSI
NexentaStor product page: https://nexenta.com/products/nexentastor.
This is a development branch; for the most recent stable version, see the "Supported kubernetes versions matrix" section below.
## Overview

The NexentaStor Container Storage Interface (CSI) Driver provides a CSI interface used by Container Orchestrators (CO) to manage the lifecycle of NexentaStor volumes over the iSCSI protocol.
## Supported kubernetes versions matrix

| | NexentaStor 5.1 | NexentaStor 5.2 | NexentaStor 5.3 |
|---|---|---|---|
| Kubernetes >=1.17 | 1.0.0 | 1.0.0 | 1.0.0 |
| Kubernetes >=1.17 | 1.1.0 | 1.1.0 | 1.1.0 |
| Kubernetes >=1.17 | 1.2.0 | 1.2.0 | 1.2.0 |
| Kubernetes >=1.19 | 1.3.0 | 1.3.0 | 1.3.0 |
| Kubernetes >=1.19 | master | master | master |
Releases can be found here - https://github.com/Nexenta/nexentastor-csi-driver-block/releases
## Feature List
Feature | Feature Status | CSI Driver Version | CSI Spec Version | Kubernetes Version |
---|---|---|---|---|
Static Provisioning | GA | >= v1.0.0 | >= v1.0.0 | >=1.13 |
Dynamic Provisioning | GA | >= v1.0.0 | >= v1.0.0 | >=1.13 |
RW mode | GA | >= v1.0.0 | >= v1.0.0 | >=1.13 |
RO mode | GA | >= v1.0.0 | >= v1.0.0 | >=1.13 |
Creating and deleting snapshot | GA | >= v1.0.0 | >= v1.0.0 | >=1.17 |
Provision volume from snapshot | GA | >= v1.0.0 | >= v1.0.0 | >=1.17 |
Provision volume from another volume | GA | >= v1.1.0 | >= v1.0.0 | >=1.17 |
List snapshots of a volume | Beta | >= v1.0.0 | >= v1.0.0 | >=1.17 |
Expand volume | GA | >= v1.1.0 | >= v1.1.0 | >=1.16 |
Topology | Beta | >= v1.1.0 | >= v1.0.0 | >=1.17 |
Raw block device | GA | >= v1.0.0 | >= v1.0.0 | >=1.14 |
StorageClass Secrets | Beta | >= v1.0.0 | >= v1.0.0 | >=1.13 |
## Requirements

- Kubernetes cluster must allow privileged pods, i.e. this flag must be set for the API server and the kubelet
  (instructions):
  ```
  --allow-privileged=true
  ```
- Required API server and kubelet feature gates
  (instructions):
  ```
  --feature-gates=VolumeSnapshotDataSource=true,VolumePVCDataSource=true,ExpandInUsePersistentVolumes=true,ExpandCSIVolumes=true,ExpandPersistentVolumes=true,Topology=true,CSINodeInfo=true
  ```
  If you are planning on using topology, the following feature gates are also required:
  ```
  ServiceTopology=true,CSINodeInfo=true
  ```
- Mount propagation must be enabled, i.e. the Docker daemon for the cluster must allow shared mounts
  (instructions).
- The `open-iscsi` package must be installed on each Kubernetes node (a quick verification sketch follows this list):
  ```bash
  apt install -y open-iscsi
  ```
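To confirm the iSCSI client is ready on a node, a minimal check might look like this (a sketch assuming a systemd-based distribution with `open-iscsi` installed; adjust for your OS):

```bash
# Check that the iSCSI daemon is installed and running (systemd assumed)
systemctl status iscsid --no-pager

# Print the node's initiator name; it must exist for the driver to log in to targets
cat /etc/iscsi/initiatorname.iscsi

# List active iSCSI sessions (empty output is normal before any volume is attached)
iscsiadm -m session || true
```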
## Installation

- Create a NexentaStor volumeGroup for the driver, for example:
  `csiDriverPool/csiDriverVolumeGroup`.
  By default, the driver will create filesystems in this volumeGroup and mount them to use as Kubernetes volumes.
- Clone the driver repository:
  ```bash
  git clone https://github.com/Nexenta/nexentastor-csi-driver-block.git
  cd nexentastor-csi-driver-block
  git checkout master
  ```
- Edit the `deploy/kubernetes/nexentastor-csi-driver-block-config.yaml` file. Driver configuration example:
  ```yaml
  nexentastor_map:
    nstor-box1:
      restIp: https://10.3.199.252:8443,https://10.3.199.253:8443  # [required] NexentaStor REST API endpoint(s)
      username: admin                                   # [required] NexentaStor REST API username
      password: Nexenta@1                               # [required] NexentaStor REST API password
      defaultDataIp: 10.3.1.1                           # default NexentaStor data IP or HA VIP
      defaultVolumeGroup: csiDriverPool/csiVolumeGroup  # default volume group for driver's volumes [pool/volumeGroup]
      defaultTargetGroup: tg1                           # [required] NexentaStor iSCSI target group name
      defaultTarget: iqn.2005-07.com.nexenta:01:test    # [required] NexentaStor iSCSI target
      defaultHostGroup: all                             # [required] NexentaStor host group
    nstor-slow:
      restIp: https://10.3.4.4:8443,https://10.3.4.5:8443  # [required] NexentaStor REST API endpoint(s)
      username: admin                                   # [required] NexentaStor REST API username
      password: Nexenta@1                               # [required] NexentaStor REST API password
      defaultDataIp: 10.3.1.2                           # default NexentaStor data IP or HA VIP
      defaultVolumeGroup: csiDriverPool/csiVolumeGroup2 # default volume group for driver's volumes [pool/volumeGroup]
      defaultTargetGroup: tg1                           # [required] NexentaStor iSCSI target group name
      defaultTarget: iqn.2005-07.com.nexenta:01:test    # [required] NexentaStor iSCSI target
      defaultHostGroup: all                             # [required] NexentaStor host group
  ```
  Note: the `nexentastor_map` keyword followed by a cluster name of your choice MUST be used even if you are only using one NexentaStor cluster.
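  For example, a minimal single-cluster configuration could look like the sketch below (the cluster name `nstor-1` and all values are placeholders, not settings shipped with the driver):

  ```yaml
  nexentastor_map:
    nstor-1:                                           # arbitrary cluster name of your choice
      restIp: https://10.3.199.252:8443                # NexentaStor REST API endpoint
      username: admin                                  # REST API username
      password: Nexenta@1                              # REST API password
      defaultVolumeGroup: csiDriverPool/csiVolumeGroup # pool/volumeGroup for driver volumes
      defaultTargetGroup: tg1                          # iSCSI target group name
      defaultTarget: iqn.2005-07.com.nexenta:01:test   # iSCSI target IQN
      defaultHostGroup: all                            # host group to map volumes to
      defaultDataIp: 10.3.1.1                          # data IP or HA VIP
  ```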
  All driver configuration options:

  Name | Description | Required | Example |
  ---|---|---|---|
  restIp | NexentaStor REST API endpoint(s); use `,` to separate cluster nodes | yes | https://10.3.3.4:8443 |
  username | NexentaStor REST API username | yes | admin |
  password | NexentaStor REST API password | yes | p@ssword |
  defaultVolumeGroup | parent volumeGroup for driver's filesystems [pool/volumeGroup] | yes | csiDriverPool/csiDriverVolumeGroup |
  defaultHostGroup | NexentaStor host group to map volumes | no | all |
  defaultTarget | NexentaStor iSCSI target iqn | yes | iqn.2005-07.com.nexenta:01:csiTarget1 |
  defaultTargetGroup | NexentaStor target group name | yes | CSI-tg1 |
  defaultDataIp | NexentaStor data IP or HA VIP for mounting shares | yes for PV | 20.20.20.21 |
  dynamicTargetLunAllocation | if true, the driver automatically manages iSCSI target and target group creation (default: false) | no | true |
  numOfLunsPerTarget | maximum number of LUNs that can be assigned to each target with dynamicTargetLunAllocation | no | 256 |
  useChapAuth | CHAP authentication for iSCSI targets | no | true |
  chapUser | username for CHAP authentication | no | admin |
  chapSecret | password/secret for CHAP authentication; minimum length is 12 symbols | yes when useChapAuth is true | verysecretpassword |
  debug | print more logs (default: false) | no | true |
  zone | zone to match topology.kubernetes.io/zone | no | us-west |
  Note: if the `defaultVolumeGroup`/`defaultDataIp` parameter is not specified in the driver configuration, then the `volumeGroup`/`dataIp` parameter must be specified in the StorageClass configuration.

  Note: all default parameters (`default*`) may be overwritten in a specific StorageClass configuration.
- Create a Kubernetes secret from the file (if you edit the configuration later, see the sketch after these steps for updating the secret):
  ```bash
  kubectl create secret generic nexentastor-csi-driver-block-config --from-file=deploy/kubernetes/nexentastor-csi-driver-block-config.yaml
  ```
- Register the driver to Kubernetes:
  ```bash
  kubectl apply -f deploy/kubernetes/nexentastor-csi-driver-block.yaml
  ```
- For snapshotting capabilities, additional CRDs must be installed once per cluster and the external-snapshotter deployed:
  ```bash
  kubectl apply -f deploy/kubernetes/snapshots/crds.yaml
  kubectl apply -f deploy/kubernetes/snapshots/snapshotter.yaml
  ```
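If you edit `nexentastor-csi-driver-block-config.yaml` after installation, the secret has to be recreated and the driver pods restarted before the new values take effect. A possible sequence (the workload names are assumed from the manifests referenced above):

```bash
# Recreate the config secret from the edited file
kubectl delete secret nexentastor-csi-driver-block-config
kubectl create secret generic nexentastor-csi-driver-block-config \
  --from-file=deploy/kubernetes/nexentastor-csi-driver-block-config.yaml

# Restart the driver so it re-reads the secret; the StatefulSet name matches the
# controller shown below, the DaemonSet name is assumed from the node pod names
kubectl rollout restart statefulset nexentastor-block-csi-controller
kubectl rollout restart daemonset nexentastor-block-csi-node
```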
### Configuring multiple controller volume replicas

We can configure this by changing the following line in the controller service config in `deploy/kubernetes/nexentastor-csi-driver-block.yaml`:

```yaml
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: nexentastor-block-csi-controller
spec:
  serviceName: nexentastor-block-csi-controller-service
  replicas: 1 # Change this to 2 or more.
```
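After editing the manifest, re-applying it updates the running controller; an already-deployed controller can also be scaled directly. A sketch, assuming the default StatefulSet name shown above:

```bash
# Either re-apply the edited manifest...
kubectl apply -f deploy/kubernetes/nexentastor-csi-driver-block.yaml

# ...or scale the already-deployed controller StatefulSet directly
kubectl scale statefulset nexentastor-block-csi-controller --replicas=2
```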
NexentaStor CSI driver's pods should be running after installation:

```bash
$ kubectl get pods
nexentastor-block-csi-controller-0   4/4   Running   0   23h
nexentastor-block-csi-controller-1   4/4   Running   0   23h
nexentastor-block-csi-node-6cmsj     2/2   Running   0   23h
nexentastor-block-csi-node-wcrgk     2/2   Running   0   23h
nexentastor-block-csi-node-xtmgv     2/2   Running   0   23h
```
## Usage

### Dynamically provisioned volumes

For dynamic volume provisioning, the administrator needs to set up a StorageClass pointing to the driver.
In this case Kubernetes generates the volume name automatically (for example `pvc-ns-cfc67950-fe3c-11e8-a3ca-005056b857f8`).
Default driver configuration may be overwritten in the `parameters` section:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexentastor-csi-driver-block-sc-nginx-dynamic
provisioner: nexentastor-csi-driver.nexenta.com
mountOptions:                          # list of options for `mount -o ...` command
#  - noatime
#- matchLabelExpressions:              # use the following lines to configure topology by zones
#  - key: topology.kubernetes.io/zone
#    values:
#    - us-east
parameters:
  #configName: nstor-slow              # specify the exact NexentaStor appliance that you want to use to provision volumes
  #volumeGroup: customPool/customvolumeGroup  # to overwrite "defaultVolumeGroup" config property [pool/volumeGroup]
  #dataIp: 20.20.20.253                # to overwrite "defaultDataIp" config property
```
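For reference, a PersistentVolumeClaim that triggers dynamic provisioning through this StorageClass could look like the following sketch (the claim name is illustrative and not part of the shipped examples):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexentastor-csi-driver-block-pvc-nginx-dynamic   # illustrative name
spec:
  storageClassName: nexentastor-csi-driver-block-sc-nginx-dynamic
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```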
#### Parameters

Name | Description | Example |
---|---|---|
volumeGroup | parent volumeGroup for driver's filesystems [pool/volumeGroup] | customPool/customvolumeGroup |
dataIp | NexentaStor data IP or HA VIP for mounting shares | 20.20.20.253 |
configName | name of the NexentaStor appliance from the config file | nstor-ssd |
#### Example

Run Nginx pod with dynamically provisioned volume:

```bash
kubectl apply -f examples/kubernetes/nginx-dynamic-volume.yaml

# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-dynamic-volume.yaml
```
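Since raw block devices are listed in the feature table, a claim with `volumeMode: Block` plus a pod that attaches the device rather than a mounted filesystem might be sketched as follows; the object names and the device path are illustrative and not taken from this repository's examples:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc-example                    # illustrative name
spec:
  storageClassName: nexentastor-csi-driver-block-sc-nginx-dynamic
  volumeMode: Block                          # request a raw block device instead of a filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: block-consumer-example               # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:                         # device is exposed inside the container, not mounted
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block-pvc-example
```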
### Pre-provisioned volumes

The driver can use an already existing NexentaStor filesystem; in this case, StorageClass, PersistentVolume and PersistentVolumeClaim should be configured.

#### StorageClass configuration
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexentastor-csi-driver-block-sc-nginx-persistent
provisioner: nexentastor-csi-driver.nexenta.com
mountOptions:                          # list of options for `mount -o ...` command
#  - noatime
parameters:
  #volumeGroup: customPool/customvolumeGroup  # to overwrite "defaultVolumeGroup" config property [pool/volumeGroup]
  #dataIp: 20.20.20.253                # to overwrite "defaultDataIp" config property
```
#### PersistentVolume configuration

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nexentastor-csi-driver-block-pv-nginx-persistent
  labels:
    name: nexentastor-csi-driver-block-pv-nginx-persistent
spec:
  storageClassName: nexentastor-csi-driver-block-sc-nginx-persistent
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  csi:
    driver: nexentastor-csi-driver.nexenta.com
    volumeHandle: nstor-ssd:csiDriverPool/csiDriverVolumeGroup/nginx-persistent
  #mountOptions:                       # list of options for `mount` command
  #  - noatime
```
CSI Parameters:

Name | Description | Example |
---|---|---|
driver | installed driver name "nexentastor-csi-driver.nexenta.com" | nexentastor-csi-driver.nexenta.com |
volumeHandle | NS appliance name from config and path to existing NexentaStor filesystem [configName:pool/volumeGroup/filesystem] | nstor-ssd:PoolA/volumeGroupA/nginx |
#### PersistentVolumeClaim (pointed to the created PersistentVolume)

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexentastor-csi-driver-block-pvc-nginx-persistent
spec:
  storageClassName: nexentastor-csi-driver-block-sc-nginx-persistent
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      # to create a 1-1 relationship for pod - persistent volume use unique labels
      name: nexentastor-csi-driver-block-pv-nginx-persistent
```
#### Example

Run an nginx server using the PersistentVolume.

Note: the pre-configured filesystem should exist on the NexentaStor: `csiDriverPool/csiDriverVolumeGroup/nginx-persistent`.

```bash
kubectl apply -f examples/kubernetes/nginx-persistent-volume.yaml

# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-persistent-volume.yaml
```
### Cloned volumes

We can create a clone of an existing CSI volume.
To do so, we need to create a PersistentVolumeClaim with a `dataSource` spec pointing to the existing PVC that we want to clone.
In this case Kubernetes generates the volume name automatically (for example `pvc-ns-cfc67950-fe3c-11e8-a3ca-005056b857f8`).
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexentastor-csi-driver-block-pvc-nginx-dynamic-clone
spec:
  storageClassName: nexentastor-csi-driver-block-sc-nginx-dynamic
  dataSource:
    kind: PersistentVolumeClaim
    apiGroup: ""
    name: nexentastor-csi-driver-block-sc-nginx-dynamic # pvc name
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```
#### Example

Run an Nginx pod with a cloned volume:

```bash
kubectl apply -f examples/kubernetes/nginx-clone-volume.yaml

# to delete this pod:
kubectl delete -f examples/kubernetes/nginx-clone-volume.yaml
```
### Snapshots

Note: this feature is an alpha feature.

```bash
# create snapshot class
kubectl apply -f examples/kubernetes/snapshot-class.yaml

# take a snapshot
kubectl apply -f examples/kubernetes/take-snapshot.yaml

# deploy nginx pod with volume restored from a snapshot
kubectl apply -f examples/kubernetes/nginx-snapshot-volume.yaml

# snapshot classes
kubectl get volumesnapshotclasses.snapshot.storage.k8s.io

# snapshot list
kubectl get volumesnapshots.snapshot.storage.k8s.io

# snapshot content list
kubectl get volumesnapshotcontents.snapshot.storage.k8s.io
```
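The example manifests referenced above are not reproduced here, but a VolumeSnapshotClass and a VolumeSnapshot for this driver would roughly follow the pattern below (the API version depends on the external-snapshotter release you deployed; object and PVC names are illustrative):

```yaml
apiVersion: snapshot.storage.k8s.io/v1beta1   # assumed; use the version matching your external-snapshotter
kind: VolumeSnapshotClass
metadata:
  name: nexentastor-csi-driver-block-snapshot-class   # illustrative name
driver: nexentastor-csi-driver.nexenta.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: my-snapshot                                   # illustrative name
spec:
  volumeSnapshotClassName: nexentastor-csi-driver-block-snapshot-class
  source:
    persistentVolumeClaimName: nexentastor-csi-driver-block-pvc-nginx-dynamic  # PVC to snapshot (illustrative)
```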
### CHAP authentication

To use iSCSI CHAP authentication, configure your iSCSI client's initiator username and password on each Kubernetes node. Example for Ubuntu 18.04:

- Open the iscsid config, set the username (optional) and password for session auth (note that the minimum password length is 12), then restart iscsid:
  ```
  vi /etc/iscsi/iscsid.conf

  node.session.auth.username = admin
  node.session.auth.password = supersecretpassword

  systemctl restart iscsid.service
  ```

Now that your client is configured, add the corresponding values to the driver's config or StorageClass (see examples/nginx-dynamic-volume-chap):

```yaml
useChapAuth: true
chapUser: admin
chapSecret: supersecretpassword
```
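A StorageClass variant carrying the same CHAP settings might be sketched as follows (parameter names come from the configuration table above; the class name is illustrative):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nexentastor-csi-driver-block-sc-chap   # illustrative name
provisioner: nexentastor-csi-driver.nexenta.com
parameters:
  useChapAuth: "true"                          # StorageClass parameters must be strings
  chapUser: admin
  chapSecret: supersecretpassword              # must be at least 12 characters
```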
## Uninstall

Using the same files as for installation:

```bash
# delete driver
kubectl delete -f deploy/kubernetes/nexentastor-csi-driver-block.yaml

# delete secret
kubectl delete secret nexentastor-csi-driver-block-config
```
## Troubleshooting

- Show installed drivers:
  ```bash
  kubectl get csidrivers
  kubectl describe csidrivers
  ```
- Error:
  ```
  MountVolume.MountDevice failed for volume "pvc-ns-<...>" : driver name nexentastor-csi-driver.nexenta.com not found in the list of registered CSI drivers
  ```
  Make sure the kubelet is configured with `--root-dir=/var/lib/kubelet`, otherwise update the paths in the driver yaml file (see all requirements).
- "VolumeSnapshotDataSource" feature gate is disabled:
  ```bash
  vim /var/lib/kubelet/config.yaml
  # featureGates:
  #   VolumeSnapshotDataSource: true

  vim /etc/kubernetes/manifests/kube-apiserver.yaml
  # - --feature-gates=VolumeSnapshotDataSource=true
  ```
- Driver logs:
  ```bash
  kubectl logs --all-containers $(kubectl get pods | grep nexentastor-block-csi-controller | awk '{print $1}') -f
  kubectl logs --all-containers $(kubectl get pods | grep nexentastor-block-csi-node | awk '{print $1}') -f
  ```
- Show termination message in case the driver failed to run:
  ```bash
  kubectl get pod nexentastor-block-csi-controller-0 -o go-template="{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}"
  ```
- Configure Docker to trust insecure registries:
  ```bash
  # add `{"insecure-registries":["10.3.199.92:5000"]}` to:
  vim /etc/docker/daemon.json
  service docker restart
  ```
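- Check the iSCSI state on the node where the pod is scheduled (a generic sketch using standard `open-iscsi` commands, not scripts from this repository; the portal IP is the data IP from your driver config):
  ```bash
  # List active iSCSI sessions and the portals they are logged in to
  iscsiadm -m session -P 1

  # Re-discover targets on the NexentaStor data IP if the expected target is missing
  iscsiadm -m discovery -t sendtargets -p 10.3.1.1:3260

  # Check kernel messages for SCSI/iSCSI errors
  dmesg | grep -i -E 'iscsi|sd[a-z]' | tail -n 20
  ```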
## Development

Commits should follow the Conventional Commits spec.
Commit messages which include `feat:` and `fix:` prefixes will be included in the CHANGELOG automatically.
### Build

```bash
# print variables and help
make

# build go app on local machine
make build

# build container (+ using build container)
make container-build

# update deps
~/go/bin/dep ensure
```
### Run

Without installation to a k8s cluster, only the version command works:

```bash
./bin/nexentastor-csi-driver-block --version
```
### Publish

```bash
# push the latest built container to the local registry (see `Makefile`)
make container-push-local

# push the latest built container to hub.docker.com
make container-push-remote
```
### Tests

`test-all-*` instructions run:

- unit tests
- CSI sanity tests from https://github.com/kubernetes-csi/csi-test
- end-to-end driver tests with real K8s and NS appliances

See `Makefile` for more examples.
```bash
# Test options to be set before run tests:
# - NOCOLORS=true              # to run w/o colors
# - TEST_K8S_IP=10.3.199.250   # e2e k8s tests

# run all tests using local registry (`REGISTRY_LOCAL` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-local-image

# run all tests using hub.docker.com registry (`REGISTRY` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-remote-image

# run tests in container:
# - RSA keys from host's ~/.ssh directory will be used by container.
#   Make sure all remote hosts used in tests have host's RSA key added as trusted
#   (ssh-copy-id -i ~/.ssh/id_rsa.pub user@host)
#
# run all tests using local registry (`REGISTRY_LOCAL` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-local-image-container

# run all tests using hub.docker.com registry (`REGISTRY` in `Makefile`)
TEST_K8S_IP=10.3.199.250 make test-all-remote-image-container
```
End-to-end K8s test parameters:

```bash
# Tests install driver to k8s and run nginx pod with mounted volume
# "export NOCOLORS=true" to run w/o colors
go test tests/e2e/driver_test.go -v -count 1 \
    --k8sConnectionString="root@10.3.199.250" \
    --k8sDeploymentFile="../../deploy/kubernetes/nexentastor-csi-driver-block.yaml" \
    --k8sSecretFile="./_configs/driver-config-single-default.yaml"
```
All development happens in the master branch; when it's time to publish a new version, a new git tag should be created.
- Build and test the new version using the local registry:
  ```bash
  # build development version:
  make container-build

  # publish to local registry
  make container-push-local

  # test plugin using local registry
  TEST_K8S_IP=10.3.199.250 make test-all-local-image-container
  ```
- To release a new version, run:
  ```bash
  VERSION=X.X.X make release
  ```
  This script does the following:
  - generates a new `CHANGELOG.md`
  - builds the driver container 'nexentastor-csi-driver-block'
  - requests login to hub.docker.com
  - publishes driver version 'nexenta/nexentastor-csi-driver-block:X.X.X' to hub.docker.com
  - creates a new Git tag 'vX.X.X' and pushes it to the repository
- Update GitHub releases.