# gcp-filestore-csi-driver

Google Cloud Filestore CSI driver for use in Kubernetes and other container orchestrators.
Disclaimer: Deploying this driver manually is not an officially supported Google product. For a fully managed and supported Filestore experience on Kubernetes, use GKE with the managed Filestore driver.
## Project Overview
This driver allows volumes backed by Google Cloud Filestore instances to be
dynamically created and mounted by workloads.
## Project Status

Status: GA

Latest image: `registry.k8s.io/cloud-provider-gcp/gcp-filestore-csi-driver:v1.7.0`

Also see the known issues and the CHANGELOG.
The manifest bundle which captures all the driver components (the driver pod, which includes the csi-external-provisioner, csi-external-resizer, csi-external-snapshotter, gcp-filestore-driver, and csi-driver-registrar containers; the CSI driver object; RBAC rules; pod security policies; etc.) can be picked up from the master branch overlays directory. The overlays directory is structured per Kubernetes minor version because not all driver components can be used with all Kubernetes versions. For example, volume snapshots are supported on Kubernetes 1.17+, so the stable-1-16 manifests do not contain the snapshotter sidecar. Read more about overlays here.
Example:

- `stable-1-19` overlays bundle can be used to deploy all the components of the driver on Kubernetes 1.19.
- `stable-master` overlays bundle can be used to deploy all the components of the driver on Kubernetes master.
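Each overlay is a kustomization layered on top of the shared base manifests. A minimal sketch of what an overlay's kustomization.yaml might look like, assuming the conventional kustomize layout (the relative base path and comments are illustrative, not the repo's exact contents):

```yaml
# Hypothetical deploy/kubernetes/overlays/<version>/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base          # assumed location of the shared base manifests
# Version-specific patches and transformers go here; e.g. a stable-1-16
# overlay would omit the snapshotter sidecar.
```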
## CSI Compatibility

This plugin is compatible with CSI version 1.3.0.
## Plugin Features

### Supported CreateVolume parameters

This version of the driver creates a new Cloud Filestore instance per volume. Customizable parameters for volume creation include:
| Parameter | Values | Default | Description |
| --- | --- | --- | --- |
| tier | "standard"/"basic_hdd", "premium"/"basic_ssd", "enterprise", "high_scale_ssd"/"zonal" | "standard" | Storage performance tier |
| network | string | "default" | VPC name. When using the "PRIVATE_SERVICE_ACCESS" connect mode, the network needs to be the full VPC name. |
| reserved-ipv4-cidr | string | "" | CIDR range to allocate Filestore IP ranges from. The CIDR must be large enough to accommodate multiple Filestore IP ranges of /29 each (/26 if the enterprise tier is used). |
| reserved-ip-range | string | "" | IP range to allocate Filestore IP ranges from. This parameter is used instead of "reserved-ipv4-cidr" when "connect-mode" is set to "PRIVATE_SERVICE_ACCESS", and the value must be an allocated IP address range. The IP range must be large enough to accommodate multiple Filestore IP ranges of /29 each (/26 if the enterprise tier is used). |
| connect-mode | "DIRECT_PEERING", "PRIVATE_SERVICE_ACCESS" | "DIRECT_PEERING" | The network connect mode of the Filestore instance. To provision a Filestore instance with a shared VPC from a service project, PRIVATE_SERVICE_ACCESS mode must be used. |
| instance-encryption-kms-key | string | "" | Fully qualified resource identifier of the key used to encrypt new instances. |
For Kubernetes clusters, these parameters are specified in the StorageClass, as in the sketch below. Note that non-default networks require extra firewall setup.
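A minimal StorageClass sketch using these parameters (the class name and CIDR value are illustrative; filestore.csi.storage.gke.io is the driver's CSI name):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-filestore            # illustrative name
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard
  network: default
  # Illustrative range; must fit multiple /29 Filestore IP ranges.
  reserved-ipv4-cidr: 10.92.0.0/24
```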
### Currently supported features
- Volume resizing: The CSI Filestore driver supports volume expansion for all supported Filestore tiers; see the user guide here. The volume expansion feature is beta in Kubernetes 1.16+. A minimal sketch follows.
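  A minimal sketch, assuming a StorageClass like the one above (the class name is illustrative):

  ```yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-filestore-resizable  # illustrative
  provisioner: filestore.csi.storage.gke.io
  allowVolumeExpansion: true       # required for PVC expansion
  ```

  Expansion is then triggered by increasing spec.resources.requests.storage on a bound PVC.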
- Labels: Filestore supports per-instance labels, a map of key/value pairs. The Filestore CSI driver enables user-provided labels to be stamped on the instance. Users can provide labels via the 'labels' key in StorageClass.parameters. In addition, the Filestore instance can be labelled with information about the PVC/PV it was created for; to obtain the PVC/PV information, the '--extra-create-metadata' flag needs to be set on the CSI external-provisioner sidecar. User-provided label keys and values must comply with the naming convention specified here. Please see these storage class examples for applying custom user-provided labels to the Filestore instance, and the sketch below.
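  A sketch, assuming labels are passed as comma-separated key=value pairs as in the repo's storage class examples (the label names are illustrative):

  ```yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-filestore-labels     # illustrative
  provisioner: filestore.csi.storage.gke.io
  parameters:
    # Assumed format: comma-separated key=value pairs stamped on the instance.
    labels: "env=test,owner=storage-team"
  ```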
- Topology preferences: Filestore performance and network usage are affected by topology. For example, it is recommended to run workloads in the same zone where the Cloud Filestore instance is provisioned. The following table describes how provisioning can be tuned by topology. The volumeBindingMode is specified in the StorageClass used for provisioning; 'strict-topology' is a flag passed to the CSI provisioner sidecar; 'allowedTopology' is also specified in the StorageClass. The Filestore driver will use the first topology in the preferred list or, if that list is empty, the first in the requisite list. If the topology feature is not enabled in the CSI provisioner (--feature-gates=Topology=false), CreateVolume.accessibility_requirements will be nil, and the driver simply creates the instance in the zone where the driver deployment is running. See the user guide here. The topology feature is GA in Kubernetes 1.17+. A minimal StorageClass sketch follows the table below.
  | SC bind mode | 'strict-topology' | SC allowedTopology | CSI provisioner behavior |
  | --- | --- | --- | --- |
  | WaitForFirstConsumer | true | Present | If the topology of the node selected by the scheduler is not in allowedTopology, provisioning fails and the scheduler continues with a different node. Otherwise, CreateVolume is called with both requisite and preferred topologies set to that of the selected node. |
  | WaitForFirstConsumer | false | Present | If the topology of the node selected by the scheduler is not in allowedTopology, provisioning fails and the scheduler continues with a different node. Otherwise, CreateVolume is called with requisite set to allowedTopology and preferred set to allowedTopology rearranged with the selected node's topology first. |
  | WaitForFirstConsumer | true | Not present | CreateVolume is called with requisite set to the selected node's topology and preferred set to the same. |
  | WaitForFirstConsumer | false | Not present | CreateVolume is called with requisite set to the aggregated topology across all nodes that matches the topology of the selected node, and preferred set to the sorted-and-shifted version of requisite with the selected node's topology first. |
  | Immediate | N/A | Present | CreateVolume is called with requisite set to allowedTopology and preferred set to requisite sorted and shifted at a randomized index. |
  | Immediate | N/A | Not present | CreateVolume is called with requisite set to the aggregated topology across nodes that contain the topology keys of CSINode objects, and preferred set to requisite sorted and shifted at a randomized index. |
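  A minimal sketch of the WaitForFirstConsumer-with-allowedTopology case (the zone is illustrative, and the topology key topology.gke.io/zone is an assumption about the driver's zonal topology label):

  ```yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-filestore-topology   # illustrative
  provisioner: filestore.csi.storage.gke.io
  volumeBindingMode: WaitForFirstConsumer
  allowedTopologies:
    - matchLabelExpressions:
        - key: topology.gke.io/zone  # assumed topology key
          values:
            - us-central1-b          # illustrative zone
  ```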
- Volume Snapshot: The CSI driver currently supports CSI VolumeSnapshots on a GCP Filestore instance using the GCP Filestore backup feature. CSI VolumeSnapshot is a beta feature in Kubernetes, enabled by default in 1.17+. The GCP Filestore snapshot alpha is not currently supported, but will be supported in the future via the type parameter in the VolumeSnapshotClass. For more details, see the user guide here and the sketch below.
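  A sketch of a backup-based snapshot, assuming the type parameter takes the value backup as in the user guide (names are illustrative):

  ```yaml
  apiVersion: snapshot.storage.k8s.io/v1beta1   # beta snapshot API (1.17+)
  kind: VolumeSnapshotClass
  metadata:
    name: csi-filestore-backup-class  # illustrative
  driver: filestore.csi.storage.gke.io
  parameters:
    type: backup                      # assumed: GCP Filestore backups
  deletionPolicy: Delete
  ---
  apiVersion: snapshot.storage.k8s.io/v1beta1
  kind: VolumeSnapshot
  metadata:
    name: my-backup-snapshot          # illustrative
  spec:
    volumeSnapshotClassName: csi-filestore-backup-class
    source:
      persistentVolumeClaimName: my-pvc  # illustrative existing PVC
  ```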
- Volume Restore: The CSI driver supports out-of-place restore of a new GCP Filestore instance from a given GCP Filestore backup. See the user-guide restore steps here and the GCP Filestore backup restore documentation here. This feature requires Kubernetes 1.17+. A minimal sketch follows.
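  A sketch of an out-of-place restore: a new PVC whose dataSource references a VolumeSnapshot such as the one above (names and size are illustrative):

  ```yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: restored-pvc               # illustrative
  spec:
    storageClassName: csi-filestore  # provisions a new Filestore instance
    dataSource:
      kind: VolumeSnapshot
      name: my-backup-snapshot
      apiGroup: snapshot.storage.k8s.io
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Ti
  ```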
- Pre-provisioned Filestore instances: Pre-provisioned Filestore instances can be leveraged and consumed by workloads by mapping a given Filestore instance to a PersistentVolume and PersistentVolumeClaim, as in the sketch below. See the user guide here and the Filestore documentation here.
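  A sketch of a PersistentVolume bound to an existing instance; the volumeHandle format modeInstance/&lt;zone&gt;/&lt;instance-name&gt;/&lt;share-name&gt; and the ip/volume attributes are assumptions based on the repo's pre-provisioned examples, and the concrete values are illustrative:

  ```yaml
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: preprov-filestore-pv       # illustrative
  spec:
    storageClassName: ""
    capacity:
      storage: 1Ti
    accessModes:
      - ReadWriteMany
    persistentVolumeReclaimPolicy: Retain
    csi:
      driver: filestore.csi.storage.gke.io
      # Assumed handle format: modeInstance/<zone>/<instance-name>/<share-name>
      volumeHandle: "modeInstance/us-central1-c/my-instance/vol1"
      volumeAttributes:
        ip: 10.0.0.2                 # illustrative instance IP
        volume: vol1                 # file share name
  ```

  A PersistentVolumeClaim can then bind to this PV by name via spec.volumeName.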
- FsGroup: CSIVolumeFSGroupPolicy is a Kubernetes feature, beta in 1.20, which allows CSI drivers to opt into FSGroup policies. The stable-master overlay of the Filestore CSI driver now supports this; a sketch follows. See the user guide here on how to apply fsGroup to volumes backed by Filestore instances. For a workaround to apply fsGroup on 1.19 clusters (with the CSIVolumeFSGroupPolicy feature gate disabled) and on clusters <= 1.18, see the user guide here.
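  A sketch of the CSIDriver object opting into fsGroup support, assuming a 1.20+ cluster where the fsGroupPolicy field is available:

  ```yaml
  apiVersion: storage.k8s.io/v1
  kind: CSIDriver
  metadata:
    name: filestore.csi.storage.gke.io
  spec:
    attachRequired: false
    # File: Kubernetes recursively changes volume ownership to the pod's fsGroup.
    fsGroupPolicy: File
  ```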
- Resource tags: Filestore supports resource tags for instance and backup resources, a map of key/value pairs. The Filestore CSI driver enables user-defined tags to be attached to the instance and backup resources it creates. Users can provide resource tags with the resource-tags key in StorageClass.parameters or with the --resource-tags command-line option. Tags are defined as comma-separated values of the form <parent_id>/<tagKey_shortname>/<tagValue_shortname>, where parent_id is the ID of the Organization or Project resource where the tag key and tag value resources exist, tagKey_shortname is the short name of the tag key resource, and tagValue_shortname is the short name of the tag value resource. A maximum of 50 tags can be attached per resource. See https://cloud.google.com/resource-manager/docs/tags/tags-creating-and-managing for more details. Please see the storage class example for defining resource tags to be attached to Filestore instance resources, and the sketch below.
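  A sketch using the documented <parent_id>/<tagKey_shortname>/<tagValue_shortname> format (the parent ID and tag names are illustrative):

  ```yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: csi-filestore-tags        # illustrative
  provisioner: filestore.csi.storage.gke.io
  parameters:
    # Comma-separated <parent_id>/<tagKey_shortname>/<tagValue_shortname> values.
    resource-tags: "1234567890/env/prod,1234567890/team/storage"
  ```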
## Future Features
- Non-root access: By default, GCFS instances are only writable by the root user
and readable by all users. Provide a CreateVolume parameter to set non-root
owners.
- Subdirectory provisioning: Given an existing Cloud Filestore instance, provision a subdirectory as a volume. This provisioning mode does not provide capacity isolation, and quota support needs investigation. For now, the nfs-client external provisioner can be used to provide similar functionality for Kubernetes clusters.
- Windows support: The current version of the driver supports volumes mounted to Linux nodes only.
## Deploying the Driver
- Clone the repository in Cloud Shell using the following commands:

  ```
  mkdir -p $GOPATH/src/github.com/kubernetes-sigs
  cd $GOPATH/src/github.com/kubernetes-sigs
  git clone https://github.com/kubernetes-sigs/gcp-filestore-csi-driver.git
  ```
- Set up a service account with appropriate role bindings and download a service account key. This service account will be used by the driver to provision Filestore instances and otherwise access GCP APIs. This can be done by running ./deploy/project_setup.sh and pointing it at a directory in which to store the SA key. To prevent your key from leaking, do not make this directory publicly accessible!

  ```
  $ PROJECT=<your-gcp-project> GCFS_SA_DIR=<your-directory-to-store-credentials-by-default-home-dir> ./deploy/project_setup.sh
  ```
- Choose a stable overlay that matches your cluster version, e.g. stable-1-19. If you are running a more recent cluster version than those given here, use stable-master. The prow-* overlays are for testing, and the dev overlay is for driver development. ./deploy/kubernetes/cluster_setup.sh will install the driver pods as well as the necessary RBAC and resources.
- If deploying new changes from the master branch, update the overlay file with a new custom tag to identify the image:

  ```yaml
  apiVersion: builtin
  kind: ImageTagTransformer
  metadata:
    name: imagetag-gce-fs-driver
  imageTag:
    name: k8s.gcr.io/cloud-provider-gcp/gcp-filestore-csi-driver
    newName: gcr.io/<your-project>/gcp-filestore-csi-driver # Add newName
    newTag: "<your-custom-tag>" # Change to your custom tag
  ```

  Build and push the image if deploying new changes from the master branch:

  ```
  GCP_FS_CSI_STAGING_VERSION=<your-custom-tag> GCP_FS_CSI_STAGING_IMAGE=gcr.io/<your-project>/gcp-filestore-csi-driver make build-image-and-push
  ```

  Once the image is pushed, it can be verified by visiting https://pantheon.corp.google.com/gcr/images/<your-project>/global/gcp-filestore-csi-driver

  ```
  $ PROJECT=<your-gcp-project> DEPLOY_VERSION=<your-overlay-choice> GCFS_SA_DIR=<your-directory-to-store-credentials-by-default-home-dir> ./deploy/kubernetes/cluster_setup.sh
  ```
After this, the driver can be used. See ./docs/kubernetes for further instructions and examples.
- To clean up the driver, run the following:

  ```
  $ PROJECT=<your-gcp-project> DEPLOY_VERSION=<your-overlay-choice> ./deploy/kubernetes/cluster_cleanup.sh
  ```
## Kubernetes Development
- Set up a service account. Most development uses the dev overlay, where a service account key is not needed. Otherwise, use GCFS_SA_DIR as described above.

  ```
  $ PROJECT=<your-gcp-project> DEPLOY_VERSION=dev ./deploy/project_setup.sh
  ```
- To build the latest Filestore CSI driver image and push it to a container registry:

  ```
  $ PROJECT=<your-gcp-project> make build-image-and-push
  ```
- The base manifests, such as the core driver manifests and RBAC role bindings, are listed here. The overlays (e.g. prow-gke-release-staging-head, prow-gke-release-staging-rc-{k8s version}, stable-{k8s version}, dev) listed under deploy/kubernetes/overlays apply transformations on top of the base manifests.
- The 'dev' overlay uses the default service account for communicating with GCP services. The https://www.googleapis.com/auth/cloud-platform scope allows full access to all Google Cloud APIs, and this node scope will allow any pod to reach GCP services as the provided service account; it should therefore only be used for testing and development, not for production clusters. cluster_setup.sh installs kustomize, creates the driver manifests package, and deploys it to the cluster. Bring up a GCE cluster with the following:

  ```
  $ NODE_SCOPES=https://www.googleapis.com/auth/cloud-platform KUBE_GCE_NODE_SERVICE_ACCOUNT=<SERVICE_ACCOUNT_NAME>@$PROJECT.iam.gserviceaccount.com kubetest --up
  $ PROJECT=<your-gcp-project> DEPLOY_VERSION=dev ./deploy/kubernetes/cluster_setup.sh
  ```
## Gcloud Application Default Credentials and scopes

See here, here, and here.
## Filestore IAM roles and permissions

See here.
## Driver Release [Google internal only]