There is no direct way for Proxmox LXC containers to reach into a ZFS pool that belongs to the TrueNAS VM. The pools are completely isolated from the host because the disk controller is passed through at the PCI level to the TrueNAS VM, so the disks attached to that controller are not visible to the Proxmox host.
Access to TrueNAS shares works the same as if the NAS were a separate machine (similar to your Synology shares, though I'm not sure how your fstab entries come into play). There is a small twist at the end to support unprivileged containers. The Proxmox side of this procedure does not depend on the type of NFS server.
The setup is a three step process:
- Create an NFS share on TrueNAS (the NFS server). The TrueNAS documentation assumes that the dataset to share already exists.
- Mount an NFS client share on the Proxmox host.
- Create a bind mount point from the Proxmox host into the LXC container.
Note that the above steps do not depend on TrueNAS running as a VM within the Proxmox host. The TrueNAS VM has its own IP address and could just as well be an external server. (I think the same can be done with the Synology shares.) Traffic between the Proxmox host and the TrueNAS VM goes over the Ethernet bridge. I use this method between Proxmox and the TrueNAS VM in my home lab, and it works well.
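Before adding the storage, it can help to confirm that the host actually sees the exports over that bridge. One quick check from the host shell is pvesm's NFS scan, which lists the exports the server advertises (the IP address here is a placeholder for your TrueNAS server):
pvesm scan nfs 192.168.1.50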
For the second step I use the Proxmox GUI to create the NFS client storage location on the Proxmox host:
Datacenter -> Storage -> Add -> NFS
Fill in the form with:
- ID (used as the name of the local pve storage directory)
- Nodes (limits which Proxmox nodes the storage is available on; the default of all nodes is fine)
- Server (use the IP address of the NFS server)
- Export (The shares advertised by the server are listed here)
- Leave the rest as default and click the "add" button at the lower right.
(If the add fails, you may need to adjust the Maproot User on the TrueNAS NFS share; set it to root if needed.)
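For reference, the same storage entry can also be created from the host shell with pvesm. This is only a sketch; the server address, export path, and content type are placeholders for your own values:
pvesm add nfs ID_NAME --server 192.168.1.50 --export /mnt/tank/media --content images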
The NFS share is now mounted on the Proxmox host. This can be verified in the host shell:
ls -la /mnt/pve/ID_NAME
The ID_NAME is the name given in the ID field above.
The mount point can also be seen:
mount | grep ID_NAME
The third step is to create a bind mount point into the LXC container, as described in the Proxmox docs.
From the host shell, the command will look something like:
pct set 100 -mp0 /mnt/pve/ID_NAME,mp=/media
Where:
- 100 is the target container ID
- /mnt/pve/ID_NAME is the directory in the host to bind to the LXC container
- mp=/media is the mount point in the LXC container.
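The setting can be verified from the host shell. pct config prints the container configuration, and with the example values above the new entry should look roughly like this:
pct config 100
(the output should include a line like: mp0: /mnt/pve/ID_NAME,mp=/media)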
The NFS share should now be visible from the container console (a Linux container is assumed):
ls -la /media
I believe a bind mount point is the recommended method for giving unprivileged containers access to NFS storage.
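The usual wrinkle with unprivileged containers is UID/GID mapping: by default, root inside the container maps to UID 100000 on the host, so files on the share can appear as nobody/nogroup inside the container unless the ownership on the share (or the Maproot/Mapall settings on the TrueNAS side) lines up with the mapped IDs. To see the numeric IDs the container actually gets:
ls -lan /media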
I have not found a way to create a bind mount point from the GUI; it is command line only. It would be a nice feature to add a Bind Mount Point option next to the Mount Point option in the container's Resources -> Add pull-down.
As an alternative, I've seen posts where users simply create a privileged LXC container. Doing so allows installing the NFS client package within the container, and the procedure is then similar to the Proxmox VM environment. This method follows the documented practices of the target OS in the container. I'm not sure how to quantify the security risk of privileged containers for home lab deployments, especially if they are not exposed to the internet.
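For completeness, a rough sketch of that alternative, assuming a privileged container with ID 101, a Debian-based OS inside it, and placeholder addresses and paths. Depending on the Proxmox version, the NFS mount feature may need to be enabled on the container first:
pct set 101 -features mount=nfs
Then, inside the container:
apt install nfs-common
mount -t nfs 192.168.1.50:/mnt/tank/media /media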