As stated in the original (Feb. 23, 2022) version of this short post, this is an arcane subject. It's about accessing shared files on a remote system through a WireGuard VPN tunnel when the underlying file sharing protocol is NFS (Network File System). While the latter was "developed to allow file sharing between systems residing on a local area network" (source), I thought it would be possible to mount the shared directory on the remote system in exactly the same fashion as done with the local NAS. As one would guess given this post, this may not be as straightforward as hoped.
I was already using NFS to access a NAS (Network Attached Storage) on my home network. It's a small system running OpenMediaVault version 6.0.38-1 (Shaitan). There is very little change to that system from the proposed defaults when installing OMV. The remote site, running OpenMediaVault version 6.0.39-1 (Shaitan), is almost a perfect replica of the local NAS except for the different IP subnet. Here is an overview of the pertinent parts of the networks.
WireGuard is always running on the remote NAS, so let's set up the VPN from the local desktop and then create a mount point for the remote directory.
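A sketch of those steps, assuming the WireGuard interface on the desktop is managed by wg-quick under the name wg0 and that the mount point is /media/michel (both names are assumptions):

```shell
# Bring up the WireGuard tunnel (the interface name wg0 is an assumption).
sudo wg-quick up wg0

# Check that the remote NAS is reachable through the tunnel.
ping -c 3 192.168.168.33

# Create a mount point for the remote shared directory.
sudo mkdir -p /media/michel
```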
At this point I can ping the remote NAS, log into the OMV web interface (at http://romv.local) and open SSH sessions in precisely the same fashion as with the local NAS.
While avahi is running on the remote system, its broadcasts are not reaching the local network. I may look into this later, but it's not a real problem right now given that a fixed IP address is assigned to the NAS. A simple addition to the hosts configuration file of the desktop "enables" the romv.local name on it.
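The addition would be along these lines (the short alias is an assumption):

```
# /etc/hosts on the desktop -- fixed address of the remote NAS
192.168.168.33    romv.local    romv
```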
Thanks to the WireGuard tunnel it looked as if there was no difference between the vault NAS on the local network and the romv NAS on the remote system. Consequently, I thought it should be possible to mount the shared directory at /export/_michel on romv in the exact same way that /export/nas_michel on vault is mounted on the desktop. That did not work.
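The failed attempt presumably looked something like this (the exact error message may have differed):

```shell
# Try to mount the remote share exactly like the local one -- this fails.
sudo mount 192.168.168.33:/export/_michel /media/michel
# mount.nfs: access denied by server while mounting 192.168.168.33:/export/_michel
```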
That stymied me and, unfortunately, I misread what the showmount command was saying.
With hindsight, I now realise that the command was showing that the remote machine, romv at 192.168.168.33, was exporting the NFS share, but only to clients on the 192.168.168.0/24 subnet. The WireGuard VPN gives the HP desktop machine access to that subnet, but it does not assign the desktop an address in that remote subnet. Looking at the SSH login, it's obvious that, as far as romv is concerned, the IP address of hp is 192.168.98.4, which is the address of the machine on the WireGuard virtual network. Looking back at the output of the mount commands with the verbose flag set (-v), it is, again, obvious that the request to mount the shared directory is coming from 192.168.98.4, which is a blocked address. As I said in the original note, it was too bad that I had not thought to use the verbose flag.
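For the record, a hedged reconstruction of those two commands; the showmount output shown is what the export list would have looked like at that point:

```shell
# List the exports offered by the remote NAS.
showmount -e 192.168.168.33
# Export list for 192.168.168.33:
# /export/_michel 192.168.168.0/24

# Repeat the mount with the verbose flag to see the client address in use.
sudo mount -v 192.168.168.33:/export/_michel /media/michel
```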
The quick solution to the problem is to change the client IP address of the NFS shared directories. This is usually done by editing the /etc/exports configuration file, but that file should not be edited by hand in OpenMediaVault. Instead, the change must be made with the OMV Web interface as shown in the figure below.
The default client IP address was 192.168.168.0/24 on this installation of OMV. I had to click on the shared folder, then on the pencil icon to edit the entry, and change it to 192.168.98.0/24 as shown. Changing the client IP address to the WireGuard virtual subnet ensured that requests from the desktop at 192.168.98.4 would be accepted. Don't forget to save and then apply the changes for them to take effect. At the same time, mount options could be changed if desired. I did not modify the options set by default (insecure, rw, subtree_check). To find out what these and the other options mean, refer to the exports(5) manual page.
Remove any *.exports file added in the /etc/exports.d directory if older instructions were followed. If that's done after applying the changes in the OMV Web interface, then the export table must be updated.
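Assuming the older instructions created a file such as tunnel.exports (the file name is an assumption), the cleanup could be:

```shell
# Remove the extra export table left over from the older instructions.
sudo rm /etc/exports.d/tunnel.exports

# Rebuild the export table so the change takes effect.
sudo exportfs -ra
```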
That's it: now the exported directories can be viewed from the client system and, clearly, they are exported to the 192.168.98.0/24 subnet.
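From the desktop, the check would produce output along these lines:

```shell
showmount -e 192.168.168.33
# Export list for 192.168.168.33:
# /export/_michel 192.168.98.0/24
```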
It is now a simple matter to mount the remote directory into the hp file system.
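A sketch of the mount, assuming the mount point is /media/michel/romv_nas (the exact path is an assumption based on the names used in this post):

```shell
# Create the mount point (the path is an assumption).
sudo mkdir -p /media/michel/romv_nas

# Mount the NFS v4 pseudo-file system exported by the remote NAS.
sudo mount 192.168.168.33:/ /media/michel/romv_nas
```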
While the mount point romv_nas belongs entirely to root, the shared directory romv_nas/_michel is accessible to any member of the users group, to which most users on the desktop already belong. Consequently, it is possible to create, read, modify, and delete files in the remote directory just as in a local directory.
One of the advantages of mounting the remote file system in /media/michel is that it will automatically appear among the connected Devices (Appareils in French) in the file explorer. At least, that is what happens in Caja on my desktop running Linux Mint Mate.
In my case, mounting the NFS V4 pseudo-file system is not particularly useful since it contains a single directory. It is simpler to mount the latter directly, as was done in older versions of NFS. That will be done next, after unmounting the virtual file system.
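A sketch of the direct mount, under the same path assumptions as above (with NFS v4, the path may need to be given relative to the pseudo-root, i.e. :/_michel instead of :/export/_michel, depending on how the server is configured):

```shell
# Unmount the virtual file system first.
sudo umount /media/michel

# Mount the shared directory itself.
sudo mount 192.168.168.33:/export/_michel /media/michel
```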
No matter how the remote directory is mounted, a warning is in order: it is important to unmount all shared NFS directories before closing the WireGuard tunnel. If the VPN is closed with a mounted NFS share in place, commands such as ls and df -h will hang because the system will attempt to reach the missing share for a very long time.
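A safe teardown sequence would therefore be (the interface name wg0 is an assumption):

```shell
# Unmount every NFS share first...
sudo umount /media/michel

# ...and only then close the tunnel.
sudo wg-quick down wg0
```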
What if someone has a computer on the remote network and wants access to the small NAS at 192.168.168.33? With the client IP in OMV changed to 192.168.98.0/24, a client on 192.168.168.0/24 would no longer have access. In other words, was there a proper way to do what I wanted to do in the original version of this post? The short answer is yes. To see how to proceed, first restore the default client IP address in the OpenMediaVault Web interface on romv.
The default client IP address was 192.168.168.0/24 on this installation of OMV. Again, remember to apply the changes for them to take effect. Now we are back to the default situation.
Machines on the 192.168.98.0/24 subnet must now be added to the list of clients allowed to mount the remote file system. As said before, the /etc/exports file on romv cannot be modified as it will be overridden by OpenMediaVault. Instead, an extra export table called tunnel.exports, or anything else ending in .exports, is created in the /etc/exports.d directory. Any text editor could be used to add the following content.
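The content would be along these lines, reusing the export path and the default options mentioned earlier (this is a reconstruction, not a copy of the original file):

```
# /etc/exports.d/tunnel.exports -- allow WireGuard clients to mount the share
/export/_michel 192.168.98.0/24(rw,subtree_check,insecure)
```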
Since root is the owner and group of the /etc/exports.d directory, sudo will be necessary.
Don't forget to update the exports.
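That is:

```shell
# Rebuild the export table from /etc/exports and /etc/exports.d/*.exports.
sudo exportfs -ra
```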
We can now check that the shared directory is available to all clients on two subnets, 192.168.168.0/24 and 192.168.98.0/24.
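The check, with output along these lines:

```shell
showmount -e 192.168.168.33
# Export list for 192.168.168.33:
# /export/_michel 192.168.168.0/24,192.168.98.0/24
```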
Make sure that any file system mounted on /media/michel on the desktop is unmounted, then proceed to mount the NFS V4 virtual file system. The trick here is to mount the file system using the WireGuard IP address of the client.
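Presumably this is done with the clientaddr mount option, which advertises the given address to the server for NFS v4 callbacks (the use of this particular option is an assumption):

```shell
# Mount the NFS v4 pseudo-file system, advertising the desktop's
# WireGuard address to the server (clientaddr is an assumption).
sudo mount -o clientaddr=192.168.98.4 192.168.168.33:/ /media/michel
```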
Of course it is also possible to mount the shared directory directly instead of the virtual NFS V4 file system.
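Under the same assumptions, the direct mount would be:

```shell
# Mount the shared directory itself rather than the pseudo-file system.
sudo mount -o clientaddr=192.168.98.4 192.168.168.33:/export/_michel /media/michel
```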
Note how it is not necessary to restrict the file system type with the -t nfs parameter. By default, NFS version 4 will be used as long as both the remote system and the desktop are running relatively recent versions of Linux. In this example, the remote system is running OpenMediaVault on a 5.16.0 Linux kernel and the desktop machine is a Linux Mint Mate system on an older 5.4.0 Linux kernel.
My apologies for the, let's say, less than accurate advice in the first version of this post. As shown above, it is easy to access NFS shares over a WireGuard tunnel just as long as one keeps track of the IP subnet on which the WireGuard interface to the remote site is found.
The usual disclaimer is in order here: I do not claim to be an expert in the field, far from it. So any corrections or suggestions are welcome. A message can be sent by clicking on the e-mail address at the bottom of the page.