GCP Filestore as Persistent Storage in Google Kubernetes Engine Clusters

Upendra Kumarage
5 min read · Jun 14, 2020

When it comes to storage for containerized applications, we have a varied set of use cases and requirements. As we know, data stored inside a container is ephemeral and is destroyed when the container is deleted or restarted. If we are to run a containerized application in a production environment, there might be a requirement to retain data across a container restart or the deletion of a pod.

This is where the requirement for persistent storage comes in. We have several options when it comes to persistent storage in Kubernetes. Kubernetes supports a number of persistent volume types, implemented via volume plugins; GCEPersistentDisk, AWSElasticBlockStore, NFS, and Glusterfs are some examples of these plugins.

So, what is GCP Filestore? I will go with the direct introduction provided in the Filestore documentation:

Filestore instances are fully managed NFS file servers on Google Cloud for use with applications running on Compute Engine virtual machine instances or Google Kubernetes Engine clusters.

As per the above definition, GCP Filestore can simply be described as a managed NFS file storage service provided by GCP. Network File System (NFS) is a standard protocol that allows a remote storage device to be mounted as if it were a local drive. As mentioned previously, in Kubernetes we can use the NFS plugin to implement persistent volumes. As I see it, this is one of the best storage options to be used with GKE. The only downside of Filestore is the storage restriction: a Filestore instance comes with a minimum volume size of 1 TB. If one is willing to accept the cost factor, then Filestore is well suited as the persistent storage for production-grade GKE clusters. The other advantage is that the attached file share supports multiple readers and writers at the same time, i.e. the ReadWriteMany access mode.

Getting Started with GCP Filestore

I am going to skip the part about how to create a GCP project, create a Kubernetes cluster, and other related steps, as I am assuming that readers are already familiar with GKE and Kubernetes volumes. If anyone would like more information on how storage works in Kubernetes, I suggest starting with the Kubernetes documentation, as it contains a large amount of information, including practical aspects.

As the first step, we have to enable the Filestore API in GCP. We will get a page similar to the following, where we can enable the Filestore API.

Enable Filestore API
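If we prefer the command line, the API can also be enabled with gcloud. This is a minimal sketch; file.googleapis.com is the service name of the Filestore API:

# Enable the Filestore API for the current project
gcloud services enable file.googleapis.com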

After we have enabled the Filestore API (this would take several minutes to complete), we will be redirected to a page similar to the following, where we get the create instance option.

Create Instance Option

Once we click on the create instance option, we will get a page as follows, where we have to fill in the required fields.

Create a Filestore Instance

Here, it is recommended to use a standard naming convention; we can name the instance according to the practice we use for naming VMs and Kubernetes node pools in the production environment. The instance tier is to be selected based on our requirements, and it is a cost factor as well. The next important field is the Authorized Network: here we have to provide the VPC network where the Kubernetes cluster is deployed, unless we are using a shared VPC, which allows cross-access of resources. The region and the zone can be selected based on our current production deployment, or a different location can be used as well. I will not be changing the IP address range and will use the automatically allocated one, which is recommended unless there is a specific requirement.

The file share name can also be given based on our standard naming convention, and as I mentioned earlier, the minimum capacity is 1 TB, which is the only downside of Filestore. Once the required details are completed and the Filestore instance is created (this would take several minutes), we can access the instance details, where we can find information such as the IP address.
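The same instance can also be created from the command line with gcloud. The following is only a sketch: the instance name, zone, tier, and network below are hypothetical placeholders, and the file share name volume1 matches the spec.nfs.path used later in the PV template:

# Create a Filestore instance (name, zone, tier, and network are placeholders)
gcloud filestore instances create prod-filestore-01 \
    --zone=us-central1-a \
    --tier=STANDARD \
    --file-share=name=volume1,capacity=1TB \
    --network=name=default

# Describe the instance to retrieve details such as its IP address
gcloud filestore instances describe prod-filestore-01 --zone=us-central1-a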

Using the created Filestore as Persistent Storage in GKE

In order to use the created Filestore volume as persistent storage in GKE, first we have to create a Kubernetes Persistent Volume (PV). We can use the following template to create one:

nfs-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-nfs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /volume1
    server: 10.x.x.x

Note the kind in the above template: PersistentVolume. We define the storage limit under spec.capacity.storage, the file share name under spec.nfs.path, and the Filestore IP address under spec.nfs.server. We can create the PV using the command:

kubectl create -f nfs-pv.yaml

Once the PV is created, the next step is to create a Kubernetes Persistent Volume Claim (PVC). Pods will access the storage through the created PVC. The relationship between a PV and a PVC is one-to-one, so one PV can be bound to only one PVC. We can use the following template to create a PVC:

nfs-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: filestore-nfs-pv
  resources:
    requests:
      storage: 20Gi

In the above template as well, we can adhere to the standard naming practices we use in the production environment. We can create the PVC using the command:

kubectl create -f nfs-pvc.yaml

We can check the created PV and PVC using the commands:

kubectl get pv
kubectl get pvc
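To verify the binding in more detail, we can use kubectl describe; in the output, the PVC Status should read Bound and its Volume should be filestore-nfs-pv:

# Inspect the PV and confirm it is bound to our claim
kubectl describe pv filestore-nfs-pv

# Inspect the PVC; Status should be Bound and Volume should be filestore-nfs-pv
kubectl describe pvc filestore-nfs-pvc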

Now we have completed creating a Persistent Volume and a Persistent Volume Claim. As the final step, we are going to mount the created PVC into a sample pod to test accessibility. To do this, we are going to use the following sample nginx Deployment template to create a pod and mount the created volume.

sample-nginx-pod.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          volumeMounts:
            - mountPath: /mnt/shared-files
              name: nfs-pvc
          ports:
            - containerPort: 80
      volumes:
        - name: nfs-pvc
          persistentVolumeClaim:
            claimName: filestore-nfs-pvc
            readOnly: false

In the above template, we have used the spec.template.spec.containers[].volumeMounts field to specify where the volume should be mounted inside the pod, and the spec.template.spec.volumes field to specify which PVC is to be used. To create the Deployment, we can use the command:

kubectl create -f sample-nginx-pod.yaml

Once the pod is created, we can check the mount point by accessing the created pod. Furthermore, we can copy or create some content within the shared location and verify that the files are consistently visible from pods created by another Deployment using the same mount point.
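As a quick sanity check (a sketch; the Deployment name and mount path come from the manifests above, and the deploy/NAME form requires a reasonably recent kubectl), we can write a file through the nginx pod and read it back:

# Write a test file to the shared mount through the nginx pod
kubectl exec deploy/nginx-deployment -- sh -c 'echo "hello from filestore" > /mnt/shared-files/test.txt'

# Read the file back to confirm it is persisted on the share
kubectl exec deploy/nginx-deployment -- cat /mnt/shared-files/test.txt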

References

[1] https://cloud.google.com/filestore/docs

[2] https://cloud.google.com/filestore/docs/accessing-fileshares

[3] https://kubernetes.io/docs/concepts/storage/persistent-volumes/

[4] https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes
