Storage Options for VMs, CTs and Client Computers

Discussion in 'Proxmox VE: Installation and configuration' started by Davidoff, Dec 7, 2017.

  1. Davidoff

    Davidoff New Member

    Joined:
    Nov 19, 2017
    Messages:
    12
    Likes Received:
    0
    I will first apologize for what are likely some dumb questions about storage for virtual machines and containers in Proxmox. I'm very much a newbie to Proxmox and somewhat of a newbie when it comes to virtualization generally. That said, I have spent quite a bit of time reading through the forums and googling to find the right answers before posting, but still haven't quite managed to figure things out, hence this post.

    By way of background, following are some details on my current setup:
    • Single node with Proxmox installed on two SSDs mirrored using ZFS
    • Ten HDs in total, each partitioned with an ext4 filesystem created at the host level
    • Snapraid installed at the host level and assigned 8 of the 10 HDs for content and 2 for parity
    • MergerFS installed at the host level to pool the 8 content drives together and make them available at a specific mount point (/mnt/datapool)
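
    For concreteness, the general shape of this layout is an fstab entry for the mergerfs pool plus a snapraid.conf listing the data and parity drives. The mount points and the mergerfs create policy below are illustrative placeholders, not my exact config:

    ```
    # /etc/fstab excerpt -- pool the eight content drives (paths are examples)
    /mnt/disk1:/mnt/disk2:/mnt/disk3:/mnt/disk4:/mnt/disk5:/mnt/disk6:/mnt/disk7:/mnt/disk8 /mnt/datapool fuse.mergerfs defaults,allow_other,use_ino,category.create=mfs 0 0

    # /etc/snapraid.conf excerpt -- two parity drives, eight data drives
    parity /mnt/parity1/snapraid.parity
    2-parity /mnt/parity2/snapraid.parity
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/
    # ... d3 through d8 follow the same pattern
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    ```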
    My plan was to set up a number of containers and virtual machines on that same node which would then read and write data to and from /mnt/datapool (or subdirectories thereof). One of the VMs will be a Windows Server.
    Some of these directories would also need to be accessed (read and write) by other computers and devices on the network. This would include both Windows and Linux machines. Essentially file server functionality. The pool would not be used to store either images or root directories for containers - those would remain on the SSDs. However, I was planning to use the pool to store ISOs and backups for Proxmox.

    The objective in setting up the HDs the way I did was to simplify storage management (i.e. avoid the need to monitor individual drive capacity, as MergerFS allocates files across the 8 drives automatically) and to provide some degree of fault tolerance (through the use of Snapraid - if a drive fails, then Snapraid can be used to recover the data on that drive).

    Another objective was to keep things stored using file system storage at the host level. In other words, if a VM writes a file to /mnt/datapool/, it can be read and written at the host level as if it had been created by the host. The reason for doing this was so that if, for any reason, the node died, the data on the drives could easily be recovered by pulling the drives and hooking them up to another computer.

    The part that I'm now struggling with is exactly how to make /mnt/datapool available to containers, virtual machines and other computers and devices. My first thought was just to install NFS and Samba at the host level and use either of those across the board to give access to containers, virtual machines and other computers and devices, but then I read in other threads that doing so is not recommended, and that NFS and/or Samba should instead be installed as guests on Proxmox and not on the host. So presumably, for example, an LXC container should be created and NFS/Samba installed and with /mnt/datapool/ bind-mounted to the container. But then I came across other threads that mentioned there were issues with running NFS in a container - specifically that it would not work unless a special AppArmor profile is created. Plus there seemed to be some daunting complexities with ACLs and mapping groups and users when it came to using bind mounts.
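
    As I understand it, the bind-mount approach for containers looks roughly like this, run on the Proxmox host (the container ID and inner mount point are placeholders for illustration):

    ```shell
    # Bind-mount the pool into container 101 at the same path inside the guest
    # (container ID and mount point are placeholders)
    pct set 101 -mp0 /mnt/datapool,mp=/mnt/datapool

    # Equivalently, as a line in /etc/pve/lxc/101.conf:
    # mp0: /mnt/datapool,mp=/mnt/datapool
    ```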

    I guess in other words what I'm hoping to find is some conceptual advice on how best to enable access to /mnt/datapool/ having regard to the objectives above and how I plan to use it. Any thoughts or suggestions (or for that matter even alternate approaches to how I've set things up already) would be most appreciated.

    Also, some specific questions that are somewhat related to the more general question:
    • Would it be correct to say that, for containers, provisioning them with access to /mnt/datapool would best be done by using a bind mount in each container, rather than, say, setting up NFS (either on the host or in a container) and then having each container accessing through NFS?
    • My understanding is that bind mounts are only available for containers, so would that mean that NFS would be needed to make /mnt/datapool/ available to Linux virtual machines?
    • Along similar lines, for the Windows Server virtual machine, would the best way to make /mnt/datapool available to that VM be by way of Samba, either at the host level or in a container?
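
    To make the NFS question concrete, my understanding is that an export for the Linux VMs would look something like the following (the subnet is a placeholder), applied with `exportfs -ra`:

    ```
    # /etc/exports excerpt (subnet is a placeholder)
    /mnt/datapool 192.168.1.0/24(rw,sync,no_subtree_check)
    ```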
    Any and all suggestions, thoughts, comments, critiques and/or observations would be most welcome and most appreciated.
     
  2. floh

    floh New Member

    Joined:
    Jul 19, 2018
    Messages:
    13
    Likes Received:
    0
    Hello Davidoff!

    I know that this post is old and you have probably already realized your concept. I'm also thinking about using Snapraid + mergerfs for my movie database. My idea was to pass all the disks through to a VM and set up Snapraid + mergerfs there (in case of a crash, all configs can be restored)... To share the mounted pool (/mnt/datapool), I thought Samba + NFS would be the way to get it to work.
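
    The pass-through I have in mind would be something like this, run on the host (the VM ID and disk ID are placeholders, not real devices):

    ```shell
    # Pass a whole disk through to VM 100 by its stable by-id path
    # (VM ID and disk serial are placeholders)
    qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE-SERIAL
    ```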

    How have you set up your disks?
    What would you change if you could start from scratch?
     
  3. Davidoff

    Hey floh. Since I didn't really get much in the way of suggestions, I more or less set up what I had described above. I gave some thought to implementing Snapraid+mergerfs in a VM with disk pass-through but decided against it. Instead, I installed Snapraid and mergerfs at the host level. Containers are given access by way of bind mounts. I wasn't able to use the approach recommended by Proxmox to grant rights to users in containers, so I ended up using setfacl - see https://forum.proxmox.com/threads/w...-an-unprivileged-container.38898/#post-214852 for more details on that.
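
    For example, with the default unprivileged mapping (container UIDs are offset by 100000 on the host, so container UID 1000 shows up as host UID 101000; your mapping may differ), the setfacl calls look roughly like:

    ```shell
    # Grant the mapped container user access to the pool
    # (101000 assumes the default unprivileged offset of 100000 for UID 1000)
    setfacl -R -m u:101000:rwx /mnt/datapool
    # Default ACL so newly created files and directories inherit the same access
    setfacl -R -d -m u:101000:rwx /mnt/datapool
    ```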

    In order to access from the Windows VM and other Windows clients on the network, I set up another container and created bind mounts in that container. Those bind mounts are in turn shared through Samba which is installed in the container.
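
    The share definition in that container is along these lines (the share name and group are examples, not my literal config):

    ```
    # /etc/samba/smb.conf excerpt (names are illustrative)
    [datapool]
       path = /mnt/datapool
       browseable = yes
       read only = no
       valid users = @datausers
    ```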

    Not sure if it's optimal, but it seems to work reasonably well having regard to my objectives, so I likely would not change anything given my current equipment. All that being said, if I were to start from scratch, equipment included, I would likely choose a different set of equipment altogether - rather than one large box with many drives, I would buy multiple smaller nodes (at least 3) and set up redundancy at the machine level rather than at the disk level - perhaps using something like Ceph instead.

    Almost forgot - one problem I've encountered with my setup is I/O delays: writes to the drive array often get bogged down and things slow to a crawl. However, I think this is due primarily to the physical architecture I use rather than how I set things up - everything goes through a single host bus adapter with SAS expander backplanes. I should have chosen a system with multiple HBAs connected directly to the backplanes, as I think that's where things get bogged down.

    Hope this is of some help.
     