Storage Options for VMs, CTs and Client Computers

Davidoff
Nov 19, 2017
I will first apologize for what are likely some dumb questions about storage for virtual machines and containers in Proxmox. I'm very much a newbie to Proxmox and somewhat a newbie when it comes to virtualization generally. That being said, I have spent quite a bit of time reading through the forums and googling to try to find the right answers before posting, but still haven't quite managed to figure things out, hence this post.

By way of background, following are some details on my current setup:
  • Single node with Proxmox installed on two SSDs mirrored using ZFS
  • A total of ten HDs, each partitioned with an ext4 filesystem created at the host level.
  • Snapraid installed at the host level, with 8 of the 10 HDs assigned for content and 2 for parity
  • MergerFS installed at the host level to pool the 8 content drives together and make them available at a specific mount point (/mnt/datapool); a rough sketch of the config follows this list
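Roughly, the host-level config looks something like the following (the /mnt/disk1 .. /mnt/disk8 and /mnt/parity1/2 mount points and the mergerfs options are illustrative, not necessarily exactly what I have in place):

Code:
  # /etc/fstab - mergerfs pools the eight content drives under /mnt/datapool
  /mnt/disk* /mnt/datapool fuse.mergerfs defaults,allow_other,use_ino,category.create=mfs 0 0

  # /etc/snapraid.conf - two parity drives, eight data drives
  parity /mnt/parity1/snapraid.parity
  2-parity /mnt/parity2/snapraid.2-parity
  content /mnt/disk1/snapraid.content
  data d1 /mnt/disk1/
  data d2 /mnt/disk2/
  # ... and so on through d8 /mnt/disk8/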
My plan was to set up a number of containers and virtual machines on that same node which would then read and write data to and from /mnt/datapool (or subdirectories thereof). One of the VMs will be a Windows Server.
Some of these directories would also need to be accessed (read and write) by other computers and devices on the network. This would include both Windows and Linux machines. Essentially file server functionality. The pool would not be used to store either images or root directories for containers - those would remain on the SSDs. However, I was planning to use the pool to store ISOs and backups for Proxmox.
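For the ISO/backup part, the idea would simply be to point a directory storage at the pool, something along these lines (the storage name and subdirectory here are placeholders, not a finished config):

Code:
  # add a subdirectory of the pool as a plain directory storage for ISOs and backups
  pvesm add dir datapool --path /mnt/datapool/pve --content iso,backup

  # resulting entry in /etc/pve/storage.cfg
  dir: datapool
          path /mnt/datapool/pve
          content iso,backup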

The objective in setting up the HDs the way I did was to simplify storage management (i.e. avoid the need to monitor individual drive capacity, as MergerFS allocates files across the 8 drives automatically) and to provide some degree of fault tolerance (through the use of Snapraid - if a drive fails, then Snapraid can be used to recover the data on that drive).

Another objective was to keep things stored using file system storage at the host level. In other words, if a VM writes a file to /mnt/datapool/, it can be read and written at the host level as if it had been created by the host. The reason for doing this was so that if, for any reason, the node died, the data on the drives could easily be recovered by pulling the drives and hooking them up to another computer.

The part that I'm now struggling with is exactly how to make /mnt/datapool available to containers, virtual machines and other computers and devices. My first thought was just to install NFS and Samba at the host level and use either of those across the board to give access to containers, virtual machines and other computers and devices, but then I read in other threads that doing so is not recommended, and that NFS and/or Samba should instead be installed as guests on Proxmox and not on the host. So presumably, for example, an LXC container should be created with NFS/Samba installed in it and /mnt/datapool/ bind-mounted into the container. But then I came across other threads that mentioned there were issues with running NFS in a container - specifically that it would not work unless a special AppArmor profile is created. Plus there seemed to be some daunting complexities with ACLs and mapping groups and users when it came to using bind mounts.
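To make the bind-mount part concrete, my understanding is that it would be something along these lines (container ID 101 and the in-guest path are just placeholders):

Code:
  # bind-mount the host pool into container 101 (host path, then path inside the CT)
  pct set 101 -mp0 /mnt/datapool,mp=/mnt/datapool

  # equivalent line in /etc/pve/lxc/101.conf
  mp0: /mnt/datapool,mp=/mnt/datapool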

I guess in other words what I'm hoping to find is some conceptual advice on how best to enable access to /mnt/datapool/ having regard to the objectives above and how I plan to use it. Any thoughts or suggestions (or for that matter even alternate approaches to how I've set things up already) would be most appreciated.

Also, some specific questions that are somewhat related to the more general question:
  • Would it be correct to say that, for containers, provisioning them with access to /mnt/datapool would best be done by using a bind mount in each container, rather than, say, setting up NFS (either on the host or in a container) and then having each container access it through NFS?
  • My understanding is that bind mounts are only available for containers, so would that mean that NFS would be needed to make /mnt/datapool/ available to Linux virtual machines? (A rough sketch of what that export might look like follows this list.)
  • Along similar lines, for the Windows Server virtual machine, would the best way to make /mnt/datapool available to that VM be by way of Samba, either at the host level or in a container?
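If NFS does turn out to be the answer for the Linux VMs, I assume the export itself would be fairly trivial, something like the following (the subnet is just an example):

Code:
  # /etc/exports on whichever host or guest ends up serving NFS
  /mnt/datapool 192.168.1.0/24(rw,sync,no_subtree_check)

  # apply the export
  exportfs -ra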
Any and all suggestions, thoughts, comments, critiques and/or observations would be most welcome and most appreciated.
 
Hello Davidoff!

I know that this post is old and you have probably already realized your concept. I'm also thinking about using Snapraid + mergerfs for my movie database. My idea was to pass through all disks to a VM and set up snapraid + mergerfs there (in case of a crash, all configs can be restored)... To mount the disks (/mnt/datapool) I thought samba + nfs would be the way to get it to work.

How have you setup your disks?
What would you change if you could start from scratch?
 
Hey floh. Since I didn't really get much in the way of suggestions, I more or less set up what I had described above. I gave some thought to implementing Snapraid + mergerfs in a VM with disk pass-through but decided against it. Instead, I installed Snapraid and mergerfs at the host level. Containers are given access by way of bind mounts. I wasn't able to use the approach recommended by Proxmox to grant rights to users in containers, so I ended up using setfacl - see https://forum.proxmox.com/threads/w...-an-unprivileged-container.38898/#post-214852 for more details on that.
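Roughly speaking, the setfacl part boils down to granting the shifted container IDs rights on the host directories. Something like this (the GID 101000 is just the default +100000 offset for guest group 1000, shown as an illustration rather than my exact commands):

Code:
  # an unprivileged CT's group 1000 shows up on the host as 101000 (default +100000 offset)
  setfacl -R -m g:101000:rwx /mnt/datapool
  # default ACL so newly created files and directories inherit the same access
  setfacl -R -d -m g:101000:rwx /mnt/datapool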

To provide access from the Windows VM and other Windows clients on the network, I set up another container and created bind mounts in that container. Those bind mounts are in turn shared through Samba, which is installed in the container.
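The Samba side of that container is nothing special - roughly a share definition along these lines (share name and group are placeholders, not my exact config):

Code:
  # /etc/samba/smb.conf inside the Samba container (illustrative share)
  [datapool]
      path = /mnt/datapool
      browseable = yes
      read only = no
      valid users = @smbusers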

Not sure if it's optimal, but it seems to work reasonably well having regard to my objectives, so I likely would not change anything given my current equipment. All that being said, if I were to start from scratch, equipment included, I likely would have chosen a different set of equipment altogether - rather than one large box with many drives, I would have purchased multiple smaller nodes (at least 3) and set up redundancy at the machine level rather than at the disk level, perhaps using something like Ceph instead.

Almost forgot - one problem I've encountered with my setup is I/O delays - writes to the drive array often get bogged down and things slow to a crawl. However, I think this is due primarily to the physical architecture I use rather than how I set things up - everything goes through a single host bus adapter with SAS expander backplanes. I should have chosen a system that uses multiple HBAs with straight connections to the backplanes, as I think that's where things get bogged down.

Hope this is of some help.
 
Hello,

Sorry to dig out this thread but I basically have the exact same questions.

@Davidoff, you mentioned you decided against having mergerfs + snapraid, could you please explain how you reached this decision?

I'm on the verge of deciding the contrary, because to me it seems simpler to isolate this from the host, manage everything related to the filesystems in a "dedicated" container and use nfs to expose the pools to the other containers/VMs.

Your experience would be welcome!
Thanks!
 
@Etique57 - No, that's not the case. I am currently using mergerfs + snapraid. See my response to floh above. Or did you mean putting mergerfs + snapraid in a container? I decided against that as it would have been a bit more complex to set up the disk passthroughs. Don't disagree with your reasons, but what can I say, I'm lazy.
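For what it's worth, the passthrough I was avoiding is the per-disk kind, i.e. attaching each physical disk to the VM by its stable by-id path, roughly like this (VM ID and disk identifier are placeholders):

Code:
  # attach a whole physical disk to VM 100 via its /dev/disk/by-id path
  qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX_EXAMPLE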
 
@Davidoff sorry, I indeed meant "... in a container"

Thanks anyway for answering so quickly, I'll go ahead then (to me the permissions look more complex than the passthrough :D)
 
@Etique57 - Happy to do so. In retrospect, you're probably right. It's just that once I set off down the path of using ACL permissions I thought I might as well finish it. Never did look much further into passthroughs. When you do get it set up, it would be great if you could post a quick note about your experiences (and how simple or complex it turned out to be) - I for one would be very interested.
 
@Davidoff - sure, will do.

I actually did already pass through all my drives to an OpenMediaVault VM, but I'm not happy with the way it addresses drives and mount points, and I want to move back to a more familiar Debian approach.

So currently I'm trying to make up my mind on whether I should use Turnkey Linux or not. I very much like the concept, though I think there's some clutter... anyway, I'll give it a try and will post back.
 
