NFS server and client are working but data is not on the server?
I have an in-house Kubernetes cluster running on bare metal, consisting of 5 nodes (1 master and 4 workers). I set up an NFS server natively on the master and deployed the nfs-client provisioner in Kubernetes to get dynamic NFS provisioning. Everything works properly and I can use my applications just by defining a PersistentVolumeClaim, BUT I can't find my data on the disk.
Every time I launch an application, the nfs-client provisioner creates a new directory under my NFS server's export path with the correct name, but all of these directories are empty. So my question is: where is my data?
I am using the Helm chart of the nfs-client provisioner. This is an example of the created but empty directories under my NFS server path:
/var/nfs/general$ tree
.
├── 166-postgres-claim-pvc-37146254-db50-4293-a9f7-13097689610a
│ └── data
├── 166-registry-claim-pvc-fe337e34-d9a5-4266-8178-f67973894584
├── 166-registry-slave-claim-registry-slave-0-pvc-b18d430b-e1fc-4eeb-bd12-cab9340bed69
├── 166-rtspdata-claim-pvc-bf9bc1e3-412f-4627-ade4-50817478308e
├── 172-postgres-claim-pvc-087538cf-5b67-4789-8d8b-117d41c3fe02
│ └── data
├── 172-registry-claim-pvc-7b7d9bb6-a636-4f78-b2fe-924473cb47ab
├── 172-registry-slave-claim-registry-slave-0-pvc-34e62524-fca0-48dd-ba29-b4cf178ca028
├── 172-rtspdata-claim-pvc-211a1aac-409f-431c-b78d-5b87b9017625
├── 173-postgres-claim-pvc-b901449a-0ce7-4ecf-8dfc-e6371dd3a9b4
│ └── data
├── 173-registry-claim-pvc-cd842cde-a3f7-4d54-94d6-c018e42ec495
├── 173-rtspdata-claim-pvc-a95c5748-ebed-4045-98b2-a04e534e0cf6
├── archived-161-postgres-claim-pvc-01cc1ff2-8cc8-4161-8d85-00cb6562e10e
│ └── data
├── archived-161-registry-claim-pvc-9b626e01-a565-4214-b94e-b7ba1e206a5e
├── archived-161-rtspdata-claim-pvc-b079c7e2-248e-4245-b243-5ff7dc3afa82
├── archived-162-postgres-claim-pvc-188af7ca-106d-4f2f-8905-9d7b391e9dce
│ └── data
├── archived-162-postgres-claim-pvc-356e4632-19e2-4ac9-8400-e00d39621b7c
│ └── data
├── archived-162-postgres-claim-pvc-45372032-979f-4ced-be35-15ec67a322b7
│ └── data
├── archived-162-postgres-claim-pvc-6d5e1f01-ad5b-45cc-9eef-654275e3ecd2
│ └── data
├── archived-162-postgres-claim-pvc-cbf4d4ca-b9d1-4d1c-88be-621eeb3680fb
│ └── data
├── archived-162-postgres-claim-pvc-eaa32a4c-9768-469a-ad85-1e1b682c376d
│ └── data
├── archived-162-postgres-claim-pvc-f517586b-e132-4a38-8ec9-18f6d5ca000e
│ └── data
├── archived-162-registry-claim-pvc-1796642a-d639-4ede-8204-1779c029aa4e
│ └── rethinkdb_data
I have reproduced this scenario in my test environment and my data showed up as expected. To reproduce it, I followed these steps. Make sure you follow every step.
1 - Install and configure the NFS Server on the Master Node (Debian Linux here; this may change depending on your Linux distribution):
Before installing the NFS Kernel server, we need to update our system’s repository index:
$ sudo apt-get update
Now, run the following command in order to install the NFS Kernel Server on your system:
$ sudo apt install nfs-kernel-server
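If you want to make sure the server actually came up, checking the service status is a quick sanity check (assuming systemd, as on current Debian):
$ sudo systemctl status nfs-kernel-server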
Create the Export Directory
$ sudo mkdir -p /mnt/nfs_server_files
Since we want all clients to access the directory, we will remove the restrictive permissions on the export folder with the following commands (this may vary in your set-up according to your security policy):
$ sudo chown nobody:nogroup /mnt/nfs_server_files
$ sudo chmod 777 /mnt/nfs_server_files
Assign server access to client(s) through NFS export file
$ sudo nano /etc/exports
Editing this file requires root access, so prefix your editor command with sudo (any text editor you prefer will do). Inside this file, add a new line to allow access from other servers to your share.
/mnt/nfs_server_files 10.128.0.0/24(rw,sync,no_subtree_check)
You may want to use different options in your share. 10.128.0.0/24 is my k8s internal network.
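For example, if your pods write as root and you run into permission problems, you could relax root squashing with the standard no_root_squash export option; this is only an illustration, so adjust it to your own security policy:
/mnt/nfs_server_files 10.128.0.0/24(rw,sync,no_subtree_check,no_root_squash)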
Export the shared directory and restart the service to make sure all configuration files are correct.
$ sudo exportfs -a
$ sudo systemctl restart nfs-kernel-server
Check all active shares:
$ sudo exportfs
/mnt/nfs_server_files
10.128.0.0/24
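You can also ask exportfs for a verbose listing, which shows the options in effect for each export:
$ sudo exportfs -v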
2 - Install NFS Client on all my Worker Nodes:
$ sudo apt-get update
$ sudo apt-get install nfs-common
At this point you can run a test to check whether you have access to your share from your worker nodes:
$ sudo mkdir -p /mnt/sharedfolder_client
$ sudo mount kubemaster:/mnt/nfs_server_files /mnt/sharedfolder_client
Notice that at this point you can use the hostname of your master node, as long as the worker can resolve it (via DNS or /etc/hosts). Check that the volume is mounted as expected (for example with the command below) and create some folders and files to make sure everything is working fine.
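A quick way to confirm the mount is to look for the share in the mount table; the grep pattern here just matches the export path used above:
$ df -h | grep nfs_server_files
You should see kubemaster:/mnt/nfs_server_files listed with /mnt/sharedfolder_client as its mount point.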
$ cd /mnt/sharedfolder_client
$ mkdir test
$ touch file
Go back to your master node and check whether these files are in the /mnt/nfs_server_files folder.
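For example, a recursive listing of the export on the master should show the test directory and the file you just created from the worker:
$ ls -lR /mnt/nfs_server_files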
3 - Install NFS Client Provisioner.
Install the provisioner using helm:
$ helm install --name ext --namespace nfs --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files stable/nfs-client-provisioner
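Note that --name is Helm 2 syntax. If you are on Helm 3, the release name is positional and the equivalent command would look something like this (--create-namespace is available in recent Helm 3 releases):
$ helm install ext stable/nfs-client-provisioner --namespace nfs --create-namespace --set nfs.server=kubemaster --set nfs.path=/mnt/nfs_server_files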
Notice that I've specified a namespace for it. Check that the provisioner pod is running:
$ kubectl get pods -n nfs
NAME READY STATUS RESTARTS AGE
ext-nfs-client-provisioner-f8964b44c-2876n 1/1 Running 0 84s
At this point we have a storageclass called nfs-client:
$ kubectl get storageclass -n nfs
NAME PROVISIONER AGE
nfs-client cluster.local/ext-nfs-client-provisioner 5m30s
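Optionally, if you want PVCs that don't name a storage class to use this one automatically, you could mark it as the default StorageClass with the standard annotation:
$ kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'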
We need to create a PersistentVolumeClaim:
$ more nfs-client-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: nfs
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "nfs-client"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
$ kubectl apply -f nfs-client-pvc.yaml
Check the status (Bound is expected):
$ kubectl get persistentvolumeclaim/test-claim -n nfs
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-e1cd4c78-7c7c-4280-b1e0-41c0473652d5 1Mi RWX nfs-client 24s
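It can also help to inspect the PersistentVolume the provisioner created for this claim; for an NFS-backed PV, the Source section of the describe output shows the exact Server and Path where the data will land (volume name taken from the output above):
$ kubectl describe pv pvc-e1cd4c78-7c7c-4280-b1e0-41c0473652d5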
4 - Create a simple pod to test if we can read/write to our NFS share:
Create a pod using this yaml:
apiVersion: v1
kind: Pod
metadata:
  name: pod0
  labels:
    env: test
  namespace: nfs
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: nfs-pvc
      mountPath: "/mnt"
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim
$ kubectl apply -f pod.yaml
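Before exec'ing into it, you can quickly confirm the pod reached the Running state:
$ kubectl get pod pod0 -n nfs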
Now, let's dig inside this pod:
$ kubectl exec -ti -n nfs pod0 -- bash
Let's list all mounted volumes on our pod:
root@pod0:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 9.8G 6.1G 3.3G 66% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup
kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12 9.8G 5.8G 3.6G 62% /mnt
/dev/sda1 9.8G 6.1G 3.3G 66% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 7.4G 12K 7.4G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 7.4G 0 7.4G 0% /proc/acpi
tmpfs 7.4G 0 7.4G 0% /sys/firmware
As we can see, we have an NFS volume mounted on /mnt. (It's important to notice the path: kubemaster:/mnt/nfs_server_files/nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12.)
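If you want to be sure that /mnt really is the NFS share and not a local directory, you can also check the filesystem type of the mount from inside the pod:
root@pod0:/# grep nfs /proc/mounts
The line for /mnt should show type nfs4 (or nfs) and the kubemaster path above.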
Let's check it:
root@pod0:/# cd /mnt
root@pod0:/mnt# ls -la
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:33 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
It's empty. Let's create some files:
$ for i in 1 2 4 5 6; do touch file$i; done;
$ ls -la
total 8
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 08:58 .
drwxr-xr-x 1 root root 4096 Nov 5 08:38 ..
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file2
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file4
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file5
-rw-r--r-- 1 nobody nogroup 0 Nov 5 08:58 file6
Now let's see where these files are on our NFS Server (Master Node):
$ cd /mnt/nfs_server_files
$ ls -l
total 4
drwxrwxrwx 2 nobody nogroup 4096 Nov 5 09:11 nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12
$ cd nfs-test-claim-pvc-4550f9f0-694d-46c9-9e4c-7172a3a64b12/
$ ls -l
total 0
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file1
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file2
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file4
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file5
-rw-r--r-- 1 nobody nogroup 0 Nov 5 09:11 file6
And here are the files we just created inside our pod!
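If in your environment the provisioned directories still show up empty on the server, it is worth double-checking which server and path each PV actually references (they come from the provisioner's nfs.server and nfs.path settings); a mismatch there would explain data landing somewhere other than the directory you are inspecting. For example, using kubectl's custom-columns output:
$ kubectl get pv -o custom-columns='NAME:.metadata.name,CLAIM:.spec.claimRef.name,SERVER:.spec.nfs.server,PATH:.spec.nfs.path'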