Best practice: Mount NAS (NFS) to LXC

monkee · New Member · Jul 30, 2023
Hello,

I wonder what the best practice is for mounting a NAS (NFS share) into an LXC.
Why? I don't want to estimate and manage the size of each container's data storage (e.g. jellyfin, nextcloud, etc.), and I want all the advantages of a NAS system's user management etc. I basically just want to manage all files/storage for several containers on one big NAS. This seems easier to me, but I am open to suggestions; maybe I am overlooking something. (Or maybe I should ask more generally: how do you best store huge amounts of data that must be used inside LXCs?)

My setup (all on one Proxmox server (PVE)):
  1. NAS: VM with openmediavault (OMV).
    • qm set 101 -scsi1 /dev/disk/by-id/ata-WDC_.... passes the hard drive through to OMV so OMV can do its job (manage spin-downs, power saving, user management, network access, etc.). (Btw., the hard-drive passthrough is the reason this is a VM; AFAIK that is not possible with LXCs.)
    • Export folders from OMV via NFS to the LAN.
  2. LXC container that should use the NFS share from OMV as "big storage", e.g. nextcloud (NC):
    • I want to store the app data and DB of NC directly on my Proxmox SSD. No issues here.
    • I want the NC data (i.e., user files like pictures, etc.) on OMV (via NFS).
How to do this the right way?


Challenges I encountered / what I already tried or achieved:

I cannot mount NFS shares in an unprivileged LXC. Big problem, especially for nextcloud, which should be accessible from the internet; running it in a privileged container instead would be a security risk.
  • Workaround: Mount the NFS share on the PVE host. Not possible at boot time, because OMV is a VM running on the PVE host, so it obviously boots after PVE. Solution: wait for OMV to boot, then mount the NFS share on the PVE host (mount -t nfs 192.168.178.121:/data-storage-on-omv /mnt/data-storage/), then start the containers (which are configured with something like pct set 102 -mp0 /mnt/omv/data-storage,mp=/mnt/datastorage-on-omv,acl=1).
  • This is a dirty solution due to the manual mounting steps (it can be automated with a little bash script, but it's still ugly).
  • This solution doesn't really work for some use cases. E.g. nextcloud requires the user data files to be owned by the Linux user "www-data". But if I write something from the nextcloud LXC, the NAS (OMV) sees the file as owned by user "1000000", group "users". I assume the issue is with Proxmox, because user and group are preserved when I create files from the PVE console.
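The manual wait-and-mount dance can also be handed to systemd instead of a bash script. A sketch (reusing the IP, export path and mount point from above): a .mount/.automount pair mounts the share lazily on first access, so nothing breaks if OMV is not up yet when the host boots.

```ini
# /etc/systemd/system/mnt-data\x2dstorage.mount
# (the file name must match the mount point; check it with
#  systemd-escape -p --suffix=mount /mnt/data-storage)
[Unit]
Description=NFS export from the OMV VM

[Mount]
What=192.168.178.121:/data-storage-on-omv
Where=/mnt/data-storage
Type=nfs

# /etc/systemd/system/mnt-data\x2dstorage.automount
[Unit]
Description=Automount for the OMV NFS export

[Automount]
Where=/mnt/data-storage

[Install]
WantedBy=multi-user.target
```

After systemctl enable --now 'mnt-data\x2dstorage.automount', the first access to /mnt/data-storage (e.g. a container start that bind-mounts it) should trigger the NFS mount and block briefly until OMV answers, rather than failing outright.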

I don't know if this seems like an edge case, but the urge to store files on a NAS that is virtualized on the same hardware as the LXCs, and then access that NAS from those LXCs, doesn't seem strange to me?!

Let me know your thoughts. Curious to learn more about proxmox.

Thanks! :)
 
> I don't want to estimate and manage the size of the LXC data storages in containers (e.g. jellyfin, nextcloud, etc.)
No problem without a NAS when using virtual disks on thin-provisioned storage like LVM-thin or ZFS. And when working with bind mounts, you also don't need to care about fixed sizes.
> + have all the advantages of user management etc. of a NAS system.
How does that differ from the PAM user management of the PVE node when working with bind mounts, except that you get a web UI for it?

> Maybe I am overlooking something. (Or maybe I should ask more generally: How do you best store huge amounts of data which must be used inside LXCs?)
The best way to store large amounts of data is to use a VM, at least if you care about backups, as LXCs won't allow incremental backups. So a NAS VM would be useful there.

> I cannot mount NFS shares in an unprivileged LXC. Big problem, especially for nextcloud, which should be accessible from the internet, if you recognize the security issues.
> • Workaround: Mount the NFS share on the PVE host. [...] Solution: wait for OMV to boot, then mount the NFS share on the PVE host, then start the containers.
> • This is a dirty solution due to the manual mounting steps (can be automated with a little bash script, still an ugly solution).
But that's the only option if you want to use neither a less secure privileged LXC nor a far more secure VM. I wouldn't use LXCs for services that are accessible from the internet if you care about security.
> This solution doesn't really work for some use cases. E.g. nextcloud requires the user data files to be owned by the Linux user "www-data". But if I write something from the nextcloud LXC, the NAS (OMV) sees the file as owned by user "1000000", group "users".
You have to work with user remapping, so that the owner of the NFS share is mapped to your www-data user inside the LXC: https://pve.proxmox.com/wiki/Unprivileged_LXC_containers
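As a concrete sketch of that remapping (assuming CT 102, and that www-data has uid/gid 33 inside the container, which is the Debian default): all ids keep the usual 100000 offset except 33, which is passed through 1:1, so files written by www-data land on the host, and thus on the NFS share, as uid/gid 33.

```
# /etc/pve/lxc/102.conf -- custom id map covering all 65536 ids
lxc.idmap: u 0 100000 33
lxc.idmap: g 0 100000 33
lxc.idmap: u 33 33 1
lxc.idmap: g 33 33 1
lxc.idmap: u 34 100034 65502
lxc.idmap: g 34 100034 65502
```

Additionally, root on the PVE host must be allowed to map id 33, so /etc/subuid and /etc/subgid each need a root:33:1 line. Restart the container afterwards.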
 
Thanks for the answer. This helps. :)

How would you approach this?
I try to avoid virtual disks because they seem somewhat inflexible to me. My HDD space is limited, so creating a separate disk per application's data (jellyfin: movies disk, nextcloud: pictures disk, etc.) seems like too much overhead. Instead, one big storage pool with a software RAID seems like the best approach.

What I'm experimenting with at the moment:
  1. Create a VM with OMV, pass all HDDs through to the VM, create a software RAID, provide the storage via NFS to the network, and mount the storage in VMs.
    • Disadvantage: it can't be mounted cleanly by unprivileged containers (workaround: I have to mount it on the host first and then bind-mount it into the container, see my initial post). This seems like my favorite approach so far. Happy to receive feedback! :)
  2. Create a software RAID on the host with another application, share the directory via Proxmox to OMV, and provide this directory via NFS to other VMs and/or bind-mount it into unprivileged containers.
    • Disadvantage: I have to touch the host system and create a RAID at that level. Best practice seems to be not to touch the host system at all. Haven't tried this yet.
  3. Create a ZFS pool with RAID, bind-mount the storage into the OMV container (pct set 113 -mp0 /tank/,mp=/mnt/tank/), let OMV manage the NFS, and/or bind-mount it into other containers that need storage.
    • (This doesn't seem to work for me; in OMV I get touch: cannot touch 'test': Permission denied in /mnt/tank.)
  4. Other ideas are welcome...
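Regarding the "Permission denied" in option 3: with the default unprivileged mapping, root inside the CT is uid 100000 on the host, while a freshly created /tank is owned by host root, so writes from inside the container are rejected. A rough sketch of the usual fix on the PVE host (CT id 113 and the default 100000 offset are assumptions; the alternative is an id remapping as described in the wiki link above):

```
# On the PVE host: hand the bind-mounted dataset to the uid/gid that
# container root is mapped to (0 -> 100000 by default).
chown -R 100000:100000 /tank

# Then, from inside the container, writing should succeed:
pct exec 113 -- touch /mnt/tank/test
```

If several containers with different internal users need to write there, group permissions or ACLs on /tank are usually needed on top of this.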
 
> I try to avoid using virtual disks because they seem kind of inflexible to me? My hdd space is limited, so creating a disk for several application data (jellyfin: movies-disk, nextcloud: pictures-disk, etc.) seems like too much overhead. Instead one big storage with a software raid seems like the best approach.
Do you know what thin provisioning is? It's quite flexible when using thin-provisioning-capable storage like ZFS or LVM-thin. Let's say you have 1 TB of storage and you want 4 virtual disks. You could create 4 virtual disks of 1 TB each on top of that 1 TB physical disk. The disks will then only consume the space they actually need to store your files, and each of them can store up to 1 TB. You just need to monitor your storage well, as all 4 virtual disks combined must never consume more than that physical 1 TB.
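The mechanism is easy to see with a sparse file, which behaves just like a thin-provisioned volume (a generic demo, not Proxmox-specific; the file name is arbitrary):

```shell
# A "1T" disk image that initially consumes (almost) no real space:
truncate -s 1T thin-disk.img
stat -c 'apparent size: %s bytes' thin-disk.img  # advertises the full 1 TiB
du -h thin-disk.img                              # actual usage: 0 at first

# Space is only allocated when data is actually written:
printf 'hello' | dd of=thin-disk.img conv=notrunc status=none
du -h thin-disk.img                              # now one filesystem block
rm thin-disk.img
```

ZFS zvols and LVM-thin volumes do the same thing at the block layer; zpool list (or lvs) is where you watch that the combined usage stays below the physical capacity.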
 
