2023-02-19
NFS Share Over a WireGuard Tunnel
<-Installing WireGuard on openmediavault 6.0.24 (August 2022)
<-Installing WireGuard on OpenMediaVault 5.6.1 (March 2021)

As stated in the original (Feb. 23, 2022) version of this short post, this is an arcane subject. It is about accessing shared files on a remote system through a WireGuard VPN tunnel when the underlying file-sharing protocol is NFS (Network File System). Although the latter was "developed to allow file sharing between systems residing on a local area network" (source), I thought it would be possible to mount the shared directory on the remote system in exactly the same fashion as is done with the local NAS. As one would guess given this post, that turned out to be not as straightforward as hoped.

I was already using NFS to access a NAS (Network Attached Storage) on my home network. It is a small system running OpenMediaVault version 6.0.38-1 (Shaitan). That system deviates very little from the defaults proposed when installing OMV. The remote site, running OpenMediaVault version 6.0.39-1 (Shaitan), is an almost perfect replica of the local NAS except for its different IP subnet. Here is an overview of the pertinent parts of the networks.

Virtual private network

WireGuard is always running on the remote NAS, so let's set up the VPN from the local desktop and then create a mount point for the remote directory.

```
michel@hp:~$ sudo wg-quick up romv
[#] ip link add romv type wireguard
[#] wg setconf romv /dev/fd/63
[#] ip -4 address add 192.168.98.4/24 dev romv
[#] ip link set mtu 1420 up dev romv
[#] ip -4 route add 192.168.168.0/24 dev romv
michel@hp:~$ sudo mkdir -p /media/michel/romv_nas
```
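For reference, a WireGuard configuration producing that output would look roughly like the following sketch. The key material, endpoint host, and port are placeholders, not values taken from the original post:

```ini
# /etc/wireguard/romv.conf on the desktop -- a sketch, placeholders throughout
[Interface]
# The desktop's address on the WireGuard virtual subnet.
Address = 192.168.98.4/24
PrivateKey = <desktop-private-key>

[Peer]
PublicKey = <romv-public-key>
# Route both the virtual subnet and the remote LAN through the tunnel.
AllowedIPs = 192.168.98.0/24, 192.168.168.0/24
Endpoint = <remote-host>:51820
```

Note how AllowedIPs contains the remote LAN subnet 192.168.168.0/24, which is why wg-quick adds the corresponding route when bringing the interface up.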

At this point I can ping the remote NAS, log into the OMV web interface (at http://romv.local) and open SSH sessions in precisely the same fashion as with the local NAS.

```
michel@hp:~$ ping -c 3 romv.local
PING romv.local (192.168.168.33) 56(84) bytes of data.
64 bytes from romv.local (192.168.168.33): icmp_seq=1 ttl=64 time=32.3 ms
64 bytes from romv.local (192.168.168.33): icmp_seq=2 ttl=64 time=32.9 ms
64 bytes from romv.local (192.168.168.33): icmp_seq=3 ttl=64 time=42.1 ms
--- romv.local ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 32.264/35.739/42.083/4.492 ms
michel@hp:~$ ssh romv.local
Linux romv 5.16.0-0.bpo.4-amd64 #1 SMP PREEMPT Debian 5.16.12-1~bpo11+1 (2022-03-08) x86_64
...
Last login: Fri Feb 17 15:39:00 2023 from 192.168.98.4
```


I should mention that while the mDNS stack avahi is running on the remote system, its broadcasts do not reach the local network. I may look into this later, but it is not a real problem right now given that the NAS has a fixed IP address. A simple addition to the desktop's hosts configuration file "enables" the romv.local name on it.
```
michel@hp:~$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       hp
192.168.168.33  romv.local
...
```

Thanks to the WireGuard tunnel it looked as if there was no difference between the vault NAS on the local network and the romv NAS on the remote system. Consequently, I thought it should be possible to mount the shared directory at /export/_michel on romv in the exact same way that /export/nas_michel on vault is mounted on the desktop. That did not work.

Trying to mount the NFS V4 pseudo-file system:

```
michel@hp:~$ sudo mount -v 192.168.168.33:/ /media/michel/romv_nas
mount.nfs: timeout set for Fri Feb 17 15:19:21 2023
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.168.33,clientaddr=192.168.98.4'
mount.nfs: mount(2): Operation not permitted
mount.nfs: trying text-based options 'addr=192.168.168.33'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.168.33 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.168.33 prog 100005 vers 3 prot UDP port 33382
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.168.33:/
```

Same failed result when trying to mount the actual shared directory using the NFS V4 path naming convention:

```
michel@hp:~$ sudo mount -v 192.168.168.33:/_michel /media/michel/romv_nas
mount.nfs: timeout set for Fri Feb 17 15:28:07 2023
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.168.33,clientaddr=192.168.98.4'
mount.nfs: mount(2): Operation not permitted
mount.nfs: trying text-based options 'addr=192.168.168.33'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.168.33 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.168.33 prog 100005 vers 3 prot UDP port 33382
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.168.33:/_michel
```

Same failed result using the pre-V4 shared path naming convention:

```
michel@hp:~$ sudo mount -v 192.168.168.33:/export/_michel /media/michel/romv_nas
mount.nfs: timeout set for Fri Feb 17 15:28:34 2023
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.168.33,clientaddr=192.168.98.4'
mount.nfs: mount(2): Operation not permitted
mount.nfs: trying text-based options 'addr=192.168.168.33'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.168.33 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.168.33 prog 100005 vers 3 prot UDP port 33382
mount.nfs: mount(2): Permission denied
mount.nfs: access denied by server while mounting 192.168.168.33:/export/_michel
```

That stymied me, and unfortunately I misread what the showmount command was saying.

```
michel@hp:~$ showmount -e 192.168.168.33
Export list for 192.168.168.33:
/export         192.168.168.0/24
/export/_michel 192.168.168.0/24
```

With hindsight, I now realise that the command was showing that the remote machine, romv at 192.168.168.33, was exporting the NFS shares, but only to clients on the 192.168.168.0/24 subnet. The WireGuard VPN gives the HP desktop access to that subnet, but it does not assign the desktop an address in that remote subnet. Looking at the SSH login, it is obvious that, as far as romv is concerned, the IP address of hp is 192.168.98.4, its address on the WireGuard virtual network. Looking back at the output of the mount commands with the verbose flag set (-v), it is, again, obvious that the request to mount the shared directory comes from 192.168.98.4, which is a blocked address. As I said in the original note, it is too bad that I had not thought to use the verbose flag.
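The mismatch can be verified without guessing. A couple of throw-away shell functions (not from the original post, just plain bash arithmetic) confirm that the desktop's WireGuard address falls outside the exported subnet while sitting squarely inside the virtual one:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    local IFS=.
    set -- $1
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# in_subnet ADDR NET/BITS -- succeed when ADDR lies inside NET/BITS.
in_subnet() {
    local addr net bits mask
    addr=$(ip_to_int "$1")
    net=$(ip_to_int "${2%/*}")
    bits=${2#*/}
    mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
    [ $(( addr & mask )) -eq $(( net & mask )) ]
}

# The desktop's WireGuard address against the two candidate export subnets.
in_subnet 192.168.98.4 192.168.168.0/24 && echo allowed || echo denied   # denied
in_subnet 192.168.98.4 192.168.98.0/24  && echo allowed || echo denied   # allowed
```

The first check is exactly the test the NFS server performs against its export list, which is why the mount requests were refused.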

The quick solution to the problem is to change the client IP address of the NFS shared directories. This is usually done by editing the /etc/exports configuration file, but that file should not be edited by hand in OpenMediaVault.

```
michel@romv:~$ cat /etc/exports
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
```

Instead, the change must be made with the OMV Web interface as shown in the figure below.

nfs shares settings

The default client IP address was 192.168.168.0/24 on this installation of OMV. I had to click on the shared folder and then on the pencil icon to edit the entry and change it to 192.168.98.0/24 as shown. Changing the client IP address to the WireGuard virtual subnet ensured that requests from the desktop at 192.168.98.4 would be accepted. Don't forget to Save and then apply the changes for them to take effect. At the same time, mount options could be changed if desired. I did not modify the options set by default (insecure, rw, subtree_check). To find out what these and the other available options do, refer to the exports(5) manual page.

Make sure to remove any *.exports file added to the /etc/exports.d directory if older instructions were followed. If that is done after applying the changes in the OMV Web interface, then the export table must be updated.
```
michel@romv:~$ sudo exportfs -vra
exporting 192.168.98.0/24:/export
exporting 192.168.98.0/24:/export/_michel
michel@romv:~$ sudo exportfs -v
/export/_michel 192.168.98.0/24(rw,wdelay,insecure,root_squash,fsid=f8b5b1b3-28dc-44ec-8235-33d6fcaa87cf,sec=sys,rw,insecure,root_squash,no_all_squash)
/export         192.168.98.0/24(ro,wdelay,root_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,no_all_squash)
```

That's it: the exported directories can now be viewed from the client system, and they are clearly exported to the 192.168.98.0/24 subnet.

```
michel@hp:~$ showmount -e 192.168.168.33
Export list for 192.168.168.33:
/export         192.168.98.0/24
/export/_michel 192.168.98.0/24
```

It is now a simple matter to mount the remote directory into the hp file system.

```
michel@hp:~$ cd /media/michel
michel@hp:/media/michel$ mkdir romv_nas
michel@hp:/media/michel$ sudo mount -v 192.168.168.33:/ romv_nas
mount.nfs: timeout set for Wed Feb 15 20:21:44 2023
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.168.33,clientaddr=192.168.98.4'
michel@hp:/media/michel$ ls -l
total 4
drwxr-xr-x 2 root root 4096 Feb 19 14:17 romv_nas
michel@hp:/media/michel$ ls -l romv_nas
total 4
drwxrwsr-x 4 root users 4096 Feb 15 21:51 _michel
michel@hp:/media/michel$ groups
michel adm tty uucp dialout cdrom sudo dip plugdev users lpadmin
```

While the mount point romv_nas belongs entirely to root, the shared directory romv_nas/_michel is accessible to any member of the users group, to which most users on the desktop already belong. Consequently, it is possible to create, read, modify, and delete files on the remote directory just as on a local directory.

```
michel@hp:/media/michel$ cd romv_nas/_michel
michel@hp:/media/michel/romv_nas/_michel$ echo "hello" > test.txt
michel@hp:/media/michel/romv_nas/_michel$ cat test.txt
hello
michel@hp:/media/michel/romv_nas/_michel$ rm test.txt
michel@hp:/media/michel/romv_nas/_michel$ ls test.txt
ls: cannot access 'test.txt': No such file or directory
```

One of the advantages of mounting the remote file system in /media/michel is that it will automatically appear among the connected Devices (Appareils in French) in the file explorer. At least, that is what happens in Caja on my desktop running Linux Mint Mate.

nfs4 share in Caja

In my case, mounting the NFS V4 pseudo-file system is not particularly useful since it contains a single directory. It is simpler to mount that directory directly, as was done in older versions of NFS. That is done next, after unmounting the virtual file system.

```
michel@hp:/media/michel/romv_nas/_michel$ cd ../..
michel@hp:/media/michel$ sudo umount romv_nas
michel@hp:/media/michel$ sudo mount -v 192.168.168.33:/_michel romv_nas
mount.nfs: timeout set for Sun Feb 19 15:35:35 2023
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.168.33,clientaddr=192.168.98.4'
michel@hp:/media/michel$ ls romv_nas
Laz_Projects  props.txt  Versions
```

nfs4 share in Caja

No matter how the remote directory is mounted, a warning is in order: it is important to unmount all shared NFS directories before closing the WireGuard tunnel. If the VPN is brought down with an NFS share still mounted, commands such as ls and df -h will hang because the system will keep trying to reach the missing share for a very long time.
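One way to soften that failure mode is an /etc/fstab entry with the noauto, soft, and timeo options. This is a sketch, not from the original post; the share path and mount point are the ones used above:

```
# /etc/fstab on the desktop -- hypothetical entry, not part of the OMV setup.
# noauto: do not mount at boot (the tunnel may not be up yet).
# soft,timeo=100: fail after a few retries instead of hanging indefinitely.
192.168.168.33:/_michel  /media/michel/romv_nas  nfs  noauto,soft,timeo=100  0  0
```

With that in place, a plain `sudo mount /media/michel/romv_nas` suffices once the tunnel is up, and a forgotten share produces I/O errors after a short while rather than blocking ls and df. The usual caveat applies: soft mounts can report errors to applications writing to the share, so they trade robustness for convenience.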





What if someone has a computer on the remote network and wants access to the small NAS at 192.168.168.33? With the change to the client IP in OMV to 192.168.98.0/24, a client on 192.168.168.0/24 would not have access. In other words, was there a proper way to do what I wanted to do in the original version of this post? The short answer is yes. To see how to proceed, first restore the default client IP address in the OpenMediaVault Web interface on romv.

nfs shares settings

The default client IP address was 192.168.168.0/24 on this installation of OMV. Again, remember to apply the changes for them to take effect. Now we are back to the default situation.

```
michel@hp:~$ showmount -e 192.168.168.33
Export list for 192.168.168.33:
/export         192.168.168.0/24
/export/_michel 192.168.168.0/24
```

Machines on the 192.168.98.0/24 subnet must now be added to the list of clients allowed to mount the remote file system. As said before, the /etc/exports file on romv cannot be modified, as it would be overwritten by OpenMediaVault. Instead, an extra export table called tunnel.exports (or anything else ending in .exports) is created in the /etc/exports.d directory. Any text editor can be used to add the following content.

```
/export         192.168.98.0/24(rw,fsid=0,subtree_check,insecure)
/export/_michel 192.168.98.0/24(rw,subtree_check,insecure)
```

Since root is the owner and group of the /etc/exports.d directory, sudo will be necessary.

```
michel@romv:~$ cd /etc/exports.d
michel@romv:/etc/exports.d$ sudo nano tunnel.exports
```

Don't forget to update the exports.

```
michel@romv:/etc/exports.d$ sudo exportfs -vra
exporting 192.168.168.0/24:/export
exporting 192.168.98.0/24:/export
exporting 192.168.168.0/24:/export/_michel
exporting 192.168.98.0/24:/export/_michel
```

We can now check that the shared directory is available to all clients on two subnets, 192.168.168.0/24 and 192.168.98.0/24.

```
michel@hp:~$ showmount -e 192.168.168.33
Export list for 192.168.168.33:
/export         192.168.98.0/24,192.168.168.0/24
/export/_michel 192.168.98.0/24,192.168.168.0/24
```

Make sure that any file system mounted on /media/michel on the desktop is unmounted, then proceed to mount the NFS V4 virtual file system. The trick here is to mount the file system using the WireGuard IP address of the server, 192.168.98.1.

```
michel@hp:/media/michel$ sudo umount romv_nas
michel@hp:/media/michel$ sudo mount -v -t nfs 192.168.98.1:/ romv_nas
mount.nfs: timeout set for Sun Feb 19 20:05:35 2023
michel@hp:/media/michel$ ls romv_nas
_michel
michel@hp:/media/michel$ ls romv_nas/_michel
Laz_Projects  props.txt  Versions
```

Of course it is also possible to mount the shared directory directly instead of the virtual NFS V4 file system.

```
michel@hp:/media/michel$ sudo mount -v 192.168.98.1:/_michel romv_nas
mount.nfs: timeout set for Sun Feb 19 21:32:15 2023
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.98.1,clientaddr=192.168.98.4'
michel@hp:/media/michel$ ls romv_nas
Laz_Projects  props.txt  Versions
```

Note how it is not necessary to specify the file system type with the -t nfs option. By default, NFS version 4 will be used as long as both the remote system and the desktop are running relatively recent versions of Linux. In this example, the remote system is running OpenMediaVault on a 5.16.0 Linux kernel and the desktop is a Linux Mint Mate system on the older 5.4.0 kernel.

My apologies for the, let's say, less than accurate advice in the first version of this post. As shown above, it is easy to access NFS shares over a WireGuard tunnel just as long as one keeps track of the IP subnet on which the WireGuard interface to the remote site is found.

The usual disclaimer is in order here: I do not claim to be an expert in the field, far from it, so any corrections or suggestions are welcome. A message can be sent by clicking on the e-mail address at the bottom of the page.
