Persistent NFS Storage for Kubernetes Deployments
This is post 2 of our Kubernetes homelab guide with Raspberry Pis, and in this post I will demonstrate how to provide persistent storage to your pods by using a PersistentVolume backed by NFS.
NFS Server
If you don't have an NFS server running already, you can follow my post on setting up an NFS server.
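If you are configuring the exports yourself, a minimal /etc/exports entry could look like the one below. This is a sketch under assumptions: the subnet 192.168.0.0/24 is assumed, and /data/nfs-storage is assumed to be exported as the NFSv4 root (fsid=0), which is why the persistent volume later in this post can reference the subdirectory simply as /test:

# /etc/exports (assumed subnet and options, adjust for your network)
/data/nfs-storage 192.168.0.0/24(rw,sync,no_subtree_check,fsid=0)

After editing /etc/exports, re-export the shares:

$ sudo exportfs -ra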
Once your NFS Server is up and running, create the directory where we will store the data for our deployment:
# my nfs base directory is /data/nfs-storage
$ mkdir -p /data/nfs-storage/test
Kubernetes Persistent Volumes
First we create our Kubernetes PersistentVolume and PersistentVolumeClaim. Below is the YAML for test-pv.yml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /test
    server: 192.168.0.4
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: test-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
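One optional addition, not required for this walkthrough: because the PV's claimRef already pre-binds it to test-pvc, binding works even on clusters that ship a default StorageClass (such as local-path on k3s), but you can set an empty storageClassName on the PVC to explicitly opt out of dynamic provisioning:

# optional addition to the PersistentVolumeClaim spec
spec:
  storageClassName: ""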
Go ahead and create the persistent volume and claim:
$ kubectl apply -f test-pv.yml
persistentvolume/test-pv created
persistentvolumeclaim/test-pvc created
View your persistent volume and persistent volume claim using kubectl:
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/test-pv 1Gi RWO Retain Bound default/test-pvc 81s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-pvc Bound test-pv 1Gi RWO local-path 81s
Kubernetes Deployment
This is a simple pod that just sleeps for a long time. The intention is to exec into our pod, create some data, then delete the pod so that it gets re-provisioned, and verify that the data persisted inside the new pod.
Our test.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: busybox
          command: ["sleep"]
          args: ["100000"]
          volumeMounts:
            - name: storage
              mountPath: /data
      volumes:
        - name: storage
          persistentVolumeClaim:
            claimName: test-pvc
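One caveat before deploying: each worker node mounts the NFS share itself, so it needs the NFS client utilities installed. If the pod gets stuck in ContainerCreating with mount errors, this is the usual cause. On Debian-based systems such as Raspberry Pi OS, the package is nfs-common:

$ sudo apt-get install -y nfs-common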
Create the deployment:
$ kubectl apply -f test.yml
deployment.apps/test created
View the deployment:
$ kubectl get deployment/test
NAME READY UP-TO-DATE AVAILABLE AGE
test 1/1 1 1 4m39s
And the pods:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-6cc6c6898f-bndmb 1/1 Running 0 5m17s
Exec into the pod:
$ kubectl exec -it pod/test-6cc6c6898f-bndmb -- sh
/ #
When we inspect our mounted path /data, we can see that it is mounted from NFS:
$ df -h /data
Filesystem Size Used Available Use% Mounted on
192.168.0.4:/test 4.5T 2.8T 1.4T 67% /data
Write data to the NFS-mounted path:
$ echo $(hostname) > /data/hostname.txt
$ cat /data/hostname.txt
test-6cc6c6898f-bndmb
Now exit the container, and delete the pod:
$ kubectl delete pod/test-6cc6c6898f-bndmb
pod "test-6cc6c6898f-bndmb" deleted
Give it some time, list the pods again, and you should see a new pod appear:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-6cc6c6898f-bqcrk 1/1 Running 0 34s
Exec into the new pod and check whether the data persisted:
$ kubectl exec -it pod/test-6cc6c6898f-bqcrk -- sh
$ hostname
test-6cc6c6898f-bqcrk
$ cat /data/hostname.txt
test-6cc6c6898f-bndmb
And we can see that the data for our deployment persisted.
Thank You
Thank you for reading, hope that was useful.