Need help untangling a multiple-disk Proxmox setup to manage mass storage

winterwheat

New Member
Apr 1, 2026
3
0
1
I have just started, but this topic has proven a treacherous maze to research. I know it is bad form to lead with an apology, so I am sorry to do so. :cool: Having dropped down the Proxmox storage and networking rabbit hole, it has become apparent that Proxmox takes a unique view compared to the usual OS perspectives, and even to other virtualization architectures. Architecturally, Proxmox lives at the bare-metal and virtualization level, not at the more usual application- or user-oriented level (IaaS: Infrastructure-only As A Service). Ah, yeah, but ...

Proxmox has been successfully set up behind OPNsense running on bare metal as the primary router, and with OPNsense in a VM as the secondary router in front of a private network.

But Proxmox file handling has become a muddle of misunderstanding. The TurnKey file server was a disaster, and I suspect it is obsolete anyway. Here is the Proxmox host (a higher-end mini PC) configuration:
1. 1TB ZFS mirror across the 1TB NVMe SSD and a 1TB partition of the 2TB NVMe SSD - Proxmox can have all 1TB for Proxmox overhead and magic
2. 1TB XFS on the second partition of the 2TB NVMe SSD (unmirrored) - considered part of mass storage, to be used as a high-speed cache for media file downloads
3. 2 x 8TB XFS SATA disks for mass storage in a DAS enclosure on USB 4.0 at 40Gb/s (unmirrored) - getting about 200MB/s on each simultaneously
4. 4 x 4TB XFS SATA M.2 2280 SSDs for mass storage in individual cases on USB 3.2 at 10Gb/s and USB 4.0 at 40Gb/s (unmirrored) - totally experimental, using inexpensive but dubious SSDs of Chinese origin, but they each run at about 380MB/s
5. If you trust the hardware, this is 32TB of external storage sustainable at about 2000MB/s+ over USB 3.2 and 4.0 (redundant use expected), plus 3TB of Proxmox host NVMe storage sustainable at 5000MB/s+.
6. Still working through options to pair the external mass storage in hardware or software, but that is not the focus of the question, since all 33TB of mass storage are currently empty.
7. Leaning towards PBS for backing up the Proxmox virtualization, with mergerfs and SnapRAID for the mass storage, utilizing some kind of backup scheme primarily for media and IoT video data (this is a hugely over-configured home media and IoT project to begin with).
8. But I only need to understand how to get the 33TB of mass storage integrated so that Proxmox VMs/containers(???) and external devices can use it:

a) *** Need to know how to configure the Proxmox host and the matching VM-internal configuration to support common/independent file system(s) shared between VMs/CTs and external clients, without disturbing Proxmox's own virtualization storage or bricking Proxmox. ***
b) Would prefer to avoid relying on network file sharing on the Proxmox host for VM file sharing, but will fall back to it if necessary. (Even so, the file-share VM will need access to the file systems.)
c) Happy to put network sharing in a VM (OMV?) for external hosts (and VMs as need be), as the TurnKey file server LXC experience was brutal. Lots of physical console interventions to save Proxmox.
d) The network IP policy (IPAM) calls for all devices and network-based services to be given a unique IP address/DNS name that also reflects their type of service, using a /16 network rather than relying on port numbers. This scheme is very strictly defined and tracked in admin records and in the DHCP host configuration, as well as internally on each device and service.
 
I’m going to start off with an apology of my own ;)

Sorry, but let’s dumb this down for my own sake. Essentially you have a high-end mini PC with a ZFS mirror for the OS, and a few DAS devices connected over high-speed USB.

As far as I understand, you basically have a bunch of media and IoT data that you’re storing.

You tried to make a TurnKey file share container but were unsatisfied with the performance, so you are considering OMV or some similar VM to act as the file share.

So really you want the most reliable way to share those 33TB for not only media and IoT data, but also traditional VM and container data, without causing any conflicts.

Please correct me if I’m confused on any part of that.

Off the top of my head, a few things come to mind.

1. You may want to consider passing through the DAS devices to your file share VM (like OMV). With just a single node, running the OS drives of your VMs on the 1TB mirrored local-zfs storage would be perfectly acceptable (unless you have more VMs than I’m anticipating). This would keep your media and IoT data separate from the VM disks themselves. Backing up the VM disks to PBS would be quite simple and lightweight. How you back up your media and IoT data could be entirely separate and would be totally up to you; I would not really recommend PBS for that use case. I’m not familiar with mergerfs and SnapRAID, but I assume running these utilities in a VM would be more suitable than on the PVE host itself. To access the media and IoT data, set up an SMB or NFS share from that VM.
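If you go the passthrough route, the usual way is to attach each whole USB disk to the VM by its stable /dev/disk/by-id path. A minimal sketch, assuming the file-share VM is VMID 101 (the disk ID below is a placeholder; list your real ones first):

```shell
# Find the stable by-id paths of the DAS disks
ls -l /dev/disk/by-id/

# Attach one entire DAS disk to VM 101 as an extra SCSI disk.
# Using the by-id path keeps the mapping stable across reboots
# and USB re-enumeration; "usb-WD_Elements_..." is a placeholder.
qm set 101 -scsi1 /dev/disk/by-id/usb-WD_Elements_1234567890-0:0

# Repeat with -scsi2, -scsi3, ... for the remaining DAS disks.
```

The guest then sees ordinary block devices, and OMV (or mergerfs/SnapRAID inside the VM) can manage them directly.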

2. On the other hand, if you are really wanting to use these DAS devices as Proxmox VM storage volumes as well as your media and IoT storage, then yes, that is going to be a bit of a headache.

Best case, you would have to configure these various USB storage devices into some sort of storage pool with LVM (ZFS would be a stretch here). If you got that working reliably with all of the USB at play, then you could put a VM on this LVM storage and provision a virtual disk with enough space to house all of the media and IoT data. You would need to be careful not to over-provision this disk, to ensure you still have space for future VM disks in PVE. Then set up an SMB or NFS share to access the media and IoT data. If you have a separate OS disk on your file-share VM you could back it up with PBS, but the huge 20+TB virtual drive, probably not so much.
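A rough sketch of that LVM approach, pooling the USB disks into one volume group and registering it as PVE storage. Device names and the storage/VG IDs here are placeholders; double-check with `lsblk` first, because the LVM commands destroy existing data on those disks:

```shell
# Initialize the USB disks as LVM physical volumes (DESTRUCTIVE)
pvcreate /dev/sdb /dev/sdc

# Pool them into a single volume group
vgcreate das-vg /dev/sdb /dev/sdc

# Register the VG with PVE as storage for VM disks and CT volumes
pvesm add lvm das-lvm --vgname das-vg --content images,rootdir
```

After that, `das-lvm` shows up as a storage target when creating VM disks in the GUI.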

Hope that helps.
 
Hi winterwheat,

Welcome to the forums!
8) But only need to understand how to get 33TB of the mass storage integrated so Proxmox VMs/Containers(???) and external devices can use it:
The first stop for that is (not to dump your hardware on the forum and ask members to find a way to implement your vision, but) the documentation on storage. Start for example at https://pve.proxmox.com/wiki/Storage.
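As a taste of what that documentation covers: once a disk is partitioned, formatted, and mounted on the host, exposing it to PVE can be as small as one directory-storage entry. A sketch (the storage ID and path are only examples, not from your setup):

```shell
# Mount the disk somewhere permanent (use /etc/fstab or a
# systemd mount unit so it survives reboots), then register
# the directory with PVE as storage:
mkdir -p /mnt/das/media
pvesm add dir das-media --path /mnt/das/media --content images,rootdir,backup
```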

Are you familiar with Debian or Linux in general, or have you set up a more mundane Proxmox host earlier?
 
I believe I wrote my first shell script in 1982, after several years of avoiding UNIX in favor of Multics, and at one time NASA and the US Air Force contacted me as the authority for porting NASA's flagship Computational Fluid Dynamics software between half a dozen UNIX workstations, OSes, graphics packages, and windowing systems. I have also dabbled a bit in the guts of BusyBox embedded Linux. I do remember the first release of Linux, which was a puzzling toy at the time. One thing: I never mistook man pages for an understanding of the principles of operation, after which all else is detail. I notice from plaintive comments, and from my own experience, that the Proxmox documentation seems drastically short on basic models to put it in context. AI slop and awful YouTube videos just make it worse.

From a system architecture perspective, I stand by my claim that the central source of Proxmox confusion stems from the fact that Proxmox is the basis for implementing the IaaS layer and no more. Historically, Proxmox was NOT for defining the network layer beneath it. Although the later SDN addition changes that, very few will use it in a home environment. Most critically, Proxmox does not perform the services of the OS and application layer(s) above it, including distributed (cloud) data management, beyond its own very narrow and specialized internal data usage. Trust me, upon first contact the Proxmox documentation is clear as mud in that regard.

It is not intuitively obvious how the IaaS level integrates with virtual OSes and distributed data management. I asked simply how to get the file storage into the VMs without wiping out Proxmox - obviously it is a poor idea to pass the Proxmox system storage device to a VM and thereby wipe out Proxmox's own access. I understand that network file sharing requires the standard configuration of bridges and mount points accordingly. My attempts to use the device/FS/folder linkage mechanisms kept greying out in the GUI or not working, so there must be concepts that man pages don't clarify (to some/many of us) on their own. I apologize for the length of the hardware description and can remove it, but it does define the use case to be solved. Feel free to treat it as no more than an example, or ignore it with the basic question left standing.
 
Thank you for your suggestions. To simplify: I believe the four external USB interfaces, with 32TB of attached storage between them, could be passed to a single VM running something like OMV or ZimaOS (gasp), letting mergerfs and SnapRAID live in there. Doing those two file system services on the Proxmox host would be interesting IF I could get file linkage into the VMs without risk - would host-based file system locking for data integrity be preserved between VMs? I think so, but I am not sure. It would also mean using the mechanism of passing a file system directory rather than a device interface, which somehow I failed to do in testing.
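For the record, the directory-passing mechanism I was attempting is, as far as I can tell, a container mount point, e.g. (VMID and paths below are just examples from my notes, not a working config):

```shell
# Bind-mount a host directory into container 100 at /srv/media.
# For an unprivileged CT, host-side ownership must line up with
# the container's shifted UIDs/GIDs or files appear as nobody.
pct set 100 -mp0 /mnt/das/media,mp=/srv/media
```

For VMs there seems to be no such direct equivalent; sharing a host directory with a VM apparently means falling back to a network share.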

The harder challenge comes down to trying to maximize use of the 3TB of NVMe SSD on the Proxmox host. These 3TB have to serve the 1TB internal storage needs of Proxmox itself and its immediate VM disks and VMs (which are basically VM disks). I believe (?) that a VM disk (living in the ZFS mirrored pool) can be used effectively for downloading media content for caching, which is then offloaded to mass storage elsewhere (on the USB interface). It would never be backed up. Likewise, I would like to be able to use the non-redundant 1TB of XFS storage on the second partition of the larger SSD, beyond the Proxmox half of that SSD, to cut in half the I/O required for sequential writes.