LXC MergerFS, SnapRaid

voarsh

Nov 20, 2020
Hi.
Has anyone successfully set up MergerFS and SnapRaid on an LXC?
I'm hoping to have the disks mounted in an LXC as /mnt/disk1, /mnt/disk2, /mnt/storage and /mnt/parity1.

The idea is to have multiple disks show up as one. Most people do this with VMs and LXCs, but Proxmox LXCs aren't quite the same as LXCs outside Proxmox.

One thing that I am not sure about is MergerFS's fstab settings.
(Example of fstab)
# drive entries below abbreviated for formatting purposes
/dev/disk/by-id/ata-WDC_WD60...449UPL-part1 /mnt/parity1 ext4 defaults 0 0
/dev/disk/by-id/ata-WDC_WD60...V3-part1 /mnt/disk1 ext4 defaults 0 0
/dev/disk/by-id/ata-Hit...11YNG5SD3A-part1 /mnt/disk2 xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD...32015-part1 /mnt/disk3 xfs defaults 0 0
/dev/disk/by-id/ata-TOSH...3544DGKS-part1 /mnt/disk4 xfs defaults 0 0
/dev/disk/by-id/ata-WDC_WD...074096-part1 /mnt/disk5 xfs defaults 0 0
/mnt/disk* /mnt/storage fuse.mergerfs direct_io,defaults,allow_other,minfreespace=50G,fsname=mergerfs 0 0
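As a side note, where mergerfs places new files is controlled by its create policy (the default is `epmfs`, existing path, most free space). A variant of the mergerfs line above that spreads new files onto the branch with the most free space would look like this — the explicit policy here is just an illustration, not part of the original setup:

```
/mnt/disk* /mnt/storage fuse.mergerfs direct_io,defaults,allow_other,minfreespace=50G,category.create=mfs,fsname=mergerfs 0 0
```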

To show up as:
/dev/sde1  5.5T  3.1T  2.1T  60%  /mnt/parity1
/dev/sdh1  5.5T  3.1T  2.1T  60%  /mnt/disk1
/dev/sdf1  2.8T  1.4T  1.4T  51%  /mnt/disk2
/dev/sdg1  2.8T  2.1T  643G  77%  /mnt/disk3
/dev/sda1  2.8T  2.1T  648G  77%  /mnt/disk4
/dev/sdd1  2.8T  2.2T  641G  78%  /mnt/disk5
mergerfs    17T   11T  5.4T  67%  /mnt/storage

I'd prefer to use an LXC (if possible) because many of my applications use mount points, and doing without them would be difficult; I am not sure how I would pass mount points through to a VM.
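For reference, bind-mounting host directories into a Proxmox LXC is done with mount point entries in the container config. A sketch of what `/etc/pve/lxc/<vmid>.conf` might contain for a layout like the one above (the container-side paths are assumptions):

```
mp0: /mnt/disk1,mp=/mnt/disk1
mp1: /mnt/disk2,mp=/mnt/disk2
mp2: /mnt/storage,mp=/mnt/storage
```

The same entries can be created from the CLI with `pct set <vmid> -mp0 /mnt/disk1,mp=/mnt/disk1` and so on.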


---
Just another question: if I am using MergerFS and SnapRaid, do all of the applications that will use the disks need to be on the same VM/LXC?
Say I have two LXCs each writing to half of the mount points (disks); one has MergerFS and SnapRaid, the other doesn't.
If it all needs to be on one VM/LXC, is it possible to have MergerFS/SnapRaid on both LXCs/VMs?
 
I used snapraid and mergerfs a while ago, but never on LXC. That said, I would be surprised if it did not work.

My 2 cents:
  • Make sure mergerfs uses the write policy you want it to use, and test it.
  • snapraid is very good for WORM (write once, read many) workloads. If you change files a lot, the snapraid syncs will create a lot of load on your disks.
  • snapraid is not a backup.
  • snapraid smart is fun, but overly pessimistic.
  • Edit: don't use automatic snapraid syncs unless you have a script that checks for errors first.
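The last point can be sketched as a small guard that inspects `snapraid diff` before syncing, refusing to run if too many files were deleted (a common sign of accidental deletion or a dying disk). The threshold and the parsing of the summary line are assumptions — check them against the output of your snapraid version:

```shell
#!/bin/sh
# Minimal "check before sync" guard for snapraid (sketch).
# Assumption: `snapraid diff` ends with a summary containing a line
# like "     2 removed"; verify against your snapraid version.
DEL_THRESHOLD=50

# safe_to_sync DIFF_OUTPUT
# Succeeds (exit 0) if the summary reports no more than
# DEL_THRESHOLD removed files; fails otherwise.
safe_to_sync() {
    removed=$(printf '%s\n' "$1" | awk '/removed$/ {print $1; exit}')
    removed=${removed:-0}
    [ "$removed" -le "$DEL_THRESHOLD" ]
}

# Typical cron use on the box that runs snapraid:
#   out=$(snapraid diff)
#   if safe_to_sync "$out"; then
#       snapraid sync
#   else
#       echo "too many deleted files, refusing to sync" >&2
#   fi
```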
if all of my applications that will use the disk need to be on the same
It might work if the one client/LXC only uses one disk or directory and snapraid runs in another LXC.
You do a sync maybe once a day or once a week, and snapraid does not really care where the data came from.
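For context, snapraid only needs to know where the data and parity live; it doesn't track who wrote the files. A minimal snapraid.conf sketch for a layout like the one in this thread (the content-file locations are assumptions; add one `data` line per disk):

```
# parity file on the dedicated parity disk
parity /mnt/parity1/snapraid.parity

# keep content files on more than one disk for redundancy
content /var/snapraid.content
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# data disks
data d1 /mnt/disk1/
data d2 /mnt/disk2/
```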

snapraid was developed for people who want to store lots of media files, which do not change.

For flexible storage where files change a lot, have a look at btrfs raid1.
Otherwise just use zfs.
 
This thread seems to have died out, but I thought I would see if anyone might answer this question. I have a 4-bay USB-connected drive enclosure. Proxmox sees all of the drives and they play nicely. I decided to fire up the TurnKey File Server and did a bind mount to the drives attached to the PVE server.

I am curious to try and see if I can install mergerfs and snapraid on the TurnKey container and let it control the drives connected through the bind mount. I know that I can't use ZFS in this scenario, so I am stuck with trying to be creative. I don't want to set up mergerfs and snapraid on the host, but I'm not sure if trying to MacGyver the container into something is wasted effort.

Any suggestions would be greatly appreciated!
 
I know that I can't use zfs in the scenario
The reason you should not (though you technically still can) use any kind of RAID here is that disks over USB are unreliable and have random high latencies. Using RAID technologies inside a virtualized system where disks have been passed through is beyond what the system was designed for. Normally you use virtualization to abstract things, not to hard-bind them to the host.

Better to use a dedicated NAS for the disks and use its NFS export.
 
Have you considered USB passthrough? That is probably something for a VM rather than an LXC, if that is an option.

Regarding TurnKey Linux: I like the idea; unfortunately some of the templates are a bit out of date, and I struggled to update one of them, so I usually prefer a minimal Debian and go from there. OpenMediaVault is also an option to check out.
 
I am curious to try and see if I can install mergerfs and snapraid on the TurnKey container and let it control the drives connected through the bind mount. I know that I can't use ZFS in this scenario, so I am stuck with trying to be creative. I don't want to set up mergerfs and snapraid on the host, but I'm not sure if trying to MacGyver the container into something is wasted effort.
If you're using an LXC for your bind mounts (which you seem to have referenced), why not use mergerfs and snapraid on the hypervisor host system and then mount the mergerfs directory into your LXC?
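Concretely, that approach would mean pooling the disks on the Proxmox host and handing only the pooled directory to the container — roughly like this, where the VMID and paths are placeholders:

```
# on the Proxmox host: pool the disks (fstab entry)
/mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,minfreespace=50G,fsname=mergerfs 0 0

# then expose only the pool to the container
pct set 101 -mp0 /mnt/storage,mp=/mnt/storage
```

snapraid would then also run on the host, where it can see the individual disks and the parity drive directly.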
 
