I'll apologize up front for what are likely some dumb questions about storage for virtual machines and containers in Proxmox. I'm very much a newbie to Proxmox and somewhat of a newbie when it comes to virtualization generally. That said, I have spent quite a bit of time reading through the forums and googling to try to find the right answers before posting, but still haven't quite managed to figure things out, hence this post.
By way of background, following are some details on my current setup:
- Single node with Proxmox installed on two SSDs mirrored using ZFS
- A total of ten HDs, each partitioned with an ext4 filesystem created at the host level
- Snapraid installed at the host level, with 8 of the 10 HDs assigned for content and 2 for parity
- MergerFS installed at the host level to pool the 8 content drives together and make them available at a specific mount point (/mnt/datapool)
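For concreteness, the relevant bits of the host config look roughly like this (the mount-point names /mnt/disk1..8 and /mnt/parity1..2 are illustrative, not my exact paths):

```
# /etc/fstab - pool the 8 content drives into /mnt/datapool via mergerfs
/mnt/disk* /mnt/datapool fuse.mergerfs defaults,allow_other,category.create=mfs,minfreespace=20G 0 0

# /etc/snapraid.conf - 2 parity drives, 8 content drives
parity   /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity
content  /var/snapraid/snapraid.content
content  /mnt/disk1/snapraid.content
data d1  /mnt/disk1/
data d2  /mnt/disk2/
# ...and so on through d8
```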
The plan is to create various directories within /mnt/datapool for use by the containers and VMs. Some of these directories would also need to be accessed (read and write) by other computers and devices on the network, including both Windows and Linux machines - essentially file server functionality. The pool would not be used to store either images or root directories for containers - those would remain on the SSDs. However, I was planning to use the pool to store ISOs and backups for Proxmox.
The objective in setting up the HDs the way I did was to simplify storage management (i.e. avoid the need to monitor individual drive capacity, as MergerFS allocates files across the 8 drives automatically) and to provide some degree of fault tolerance (through the use of Snapraid - if a drive fails, then Snapraid can be used to recover the data on that drive).
Another objective was to keep everything stored as plain file-system storage at the host level. In other words, if a VM writes a file to /mnt/datapool/, it can be read and written at the host level as if the host had created it. The reason for this was so that if, for any reason, the node died, the data on the drives could easily be recovered by pulling the drives and hooking them up to another computer.
The part that I'm now struggling with is exactly how to make /mnt/datapool available to containers, virtual machines and other computers and devices. My first thought was just to install NFS and Samba at the host level and use those across the board to give access to containers, virtual machines and other computers and devices. But then I read in other threads that doing so is not recommended, and that NFS and/or Samba should instead run as guests on Proxmox rather than on the host - so presumably, for example, an LXC container would be created with NFS/Samba installed and /mnt/datapool/ bind-mounted into it. But then I came across other threads mentioning that there are issues with running an NFS server in a container - specifically that it won't work unless a special AppArmor profile is created. Plus there seemed to be some daunting complexities with ACLs and mapping users and groups when it came to using bind mounts.
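To make the bind-mount part concrete, what I was assuming is something like the following (the container ID 101 and the "shared" subdirectory are made-up examples; corrections welcome if I have the pct syntax wrong):

```
# Bind-mount the host pool into container 101 at the same path
pct set 101 -mp0 /mnt/datapool,mp=/mnt/datapool

# For an unprivileged container, host-side ownership has to line up with
# the shifted uid/gid range (root in the container maps to 100000 on the
# host by default, so container gid 1000 shows up as 101000 on the host):
chown -R 101000:101000 /mnt/datapool/shared
```

It's this uid/gid shifting that I was referring to as the "daunting complexities" with mapping users and groups.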
I guess what I'm really hoping to find is some conceptual advice on how best to enable access to /mnt/datapool/, given the objectives above and how I plan to use it. Any thoughts or suggestions (or, for that matter, alternate approaches to how I've set things up already) would be most appreciated.
Also, some specific questions that are somewhat related to the more general question:
- Would it be correct to say that, for containers, the best way to provision access to /mnt/datapool would be a bind mount in each container, rather than, say, setting up NFS (either on the host or in a container) and having each container access it over NFS?
- My understanding is that bind mounts are only available for containers, so would that mean NFS is needed to make /mnt/datapool/ available to Linux virtual machines?
- Along similar lines, for the Windows Server virtual machine, would the best way to make /mnt/datapool available to that VM be by way of Samba, either at the host level or in a container?
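For the VM side, what I had pictured (wherever the server ends up living - host or dedicated container) is roughly an NFS export for the Linux VMs and a Samba share for the Windows Server VM. The subnet, host IP, share name and group below are all made up for illustration:

```
# /etc/exports - NFS export of the pool for the Linux VMs
/mnt/datapool 192.168.1.0/24(rw,sync,no_subtree_check)

# Inside a Linux VM, the pool would then be mounted with something like:
#   mount -t nfs 192.168.1.10:/mnt/datapool /mnt/datapool

# /etc/samba/smb.conf - share for the Windows Server VM and other Windows clients
[datapool]
   path = /mnt/datapool
   read only = no
   valid users = @datausers
```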