A question about storage design

FlyveHest

New Member
Sep 17, 2025
Hi all

Soon-to-be Proxmox user here, currently working out the storage design for a small Proxmox box I am building with a couple of friends.

It's going to run a few VMs that people can use for various Linux tasks and fooling around, some containers like Immich, and a few dedicated gaming servers as well.

I'm currently looking at purchasing a small SATA SSD to use as a boot drive, two M.2 NVMe disks for VM images and container filesystems, and then a couple of HDDs for slow storage of images and other static data.

I've read the docs about storage, but I am having a bit of a hard time deciding how to lay it out.

So, my first question: is it possible to create a volume group of the two M.2 disks, and in that group create a RAID1 block-based logical volume for VM images and another RAID1 logical volume for files, probably using BTRFS? (Something like a 50/50 split of the storage.)

I am interested in having all the data mirrored across the two M.2 disks, but I haven't been able to determine whether LVM allows this.
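
Something like this is what I have in mind, if LVM can actually do it (untested sketch; the device, VG and size names are just examples):

Code:
# create the VG across both M.2 disks
pvcreate /dev/nvme0n1 /dev/nvme1n1
vgcreate m2vg /dev/nvme0n1 /dev/nvme1n1
# mirrored LV used as block storage for VM images
lvcreate --type raid1 -m 1 -L 450G -n vmstore m2vg
# second mirrored LV that would get a filesystem (BTRFS?) for file storage
lvcreate --type raid1 -m 1 -L 450G -n filestore m2vg
mkfs.btrfs /dev/m2vg/filestore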

My second question: for the data drives, which will most likely be BTRFS RAID1, is it possible to mount folders from them into a VM, or should they be shared via, for instance, NFS from the Proxmox host to the VM?

As far as I can tell, bind-mounting folders into containers is possible as-is, but I'm not entirely sure whether a VM can get that kind of direct file access to the host?
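
For containers I'm assuming it would be something along these lines (container ID and paths are made up), while a VM would presumably need an NFS export instead:

Code:
# bind-mount a host directory into container 101
pct set 101 -mp0 /mnt/datapool/photos,mp=/mnt/photos
# for a VM, export the same directory over NFS from the host instead,
# e.g. a line like this in /etc/exports:
#   /mnt/datapool/photos  192.168.1.0/24(rw,no_subtree_check)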


Thanks in advance :)

/F
 
data mirrored across the two M.2 disks, but I haven't been able to determine whether LVM allows this
Proxmox doesn't support RAID with LVM. Only ZFS and BTRFS are supported by Proxmox for RAID.
And ZFS/BTRFS/Ceph, and any RAID for that matter, require datacenter drives if you really care about your data.

For a homelab, I would prioritize backups before RAID.
With PBS, a daily backup to the second disk is really fast, and even hourly backups can be scheduled for critical data.
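
For example (the PBS storage name and guest IDs are just placeholders), a one-off run from the shell looks like this, and the same job can be scheduled daily or hourly under Datacenter → Backup:

Code:
# back up two guests to a PBS storage in snapshot mode (no downtime)
vzdump 100 101 --storage pbs-backup --mode snapshot
# or everything on the node
vzdump --all --storage pbs-backup --mode snapshot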
 
Proxmox doesn't support RAID with LVM
Oh, I thought LVM supported RAID directly, but it has to be supported by Proxmox as well? (I found this post that mentions something like what I wanted to do.)

I'm not really interested in using ZFS, so can I use BTRFS volumes for VM images as well? According to the storage wiki, it doesn't support block-level volumes.

I am planning on backing up critical data, but moving multiple hundreds of gigs offsite daily isn't going to be fast, no matter which drives I choose ;)
 
A couple of things:
1. If you do wish to deploy LVM on mdadm, you can; it just requires a bit of "Linux" setup (rough sketch at the end of this post). But you shouldn't, because
2. A ZFS/BTRFS mirror is seamless, has filesystem integration, inline compression, snapshots, etc. It is a superior option by any measure.
3. In my view, ZFS is the superior choice over BTRFS, both because it is older, more stable and in broader use, and because BTRFS support in PVE's tooling is still considered beta.

The only time I'd consider LVM is when the backing store is a hardware RAID. If that's an option with your hardware it's worth considering, but point 2 stands even then.
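
For reference, the "Linux" setup from point 1 is roughly the following (untested sketch; device, VG and pool names are placeholders):

Code:
# mirror the two NVMe drives with mdadm
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
# put LVM on top of the md device
pvcreate /dev/md0
vgcreate md_vg /dev/md0
# thin pool for VM disks, then register it with PVE
lvcreate -L 800G --thinpool vmdata md_vg
pvesm add lvmthin vmdata --vgname md_vg --thinpool vmdata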
 
I have three different Proxmox nodes. One is set up with the boot and rpool all on a ZFS mirror of two M.2 NVMe drives. Another has a single NVMe drive with the OS and all VM storage on that one disk (ZFS stripe), and my "main" node has 4 enterprise SATA SSDs in two mirrored ZFS vdevs (sort of the ZFS equivalent of RAID 10, but not exactly). The first two machines are set up that way because of the limitations of the hardware.
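
Creating that kind of layout looks roughly like this (the pool name and disk IDs are just examples):

Code:
# two mirrored vdevs striped together -- roughly RAID 10
zpool create -o ashift=12 tank \
  mirror /dev/disk/by-id/ata-SSD_A /dev/disk/by-id/ata-SSD_B \
  mirror /dev/disk/by-id/ata-SSD_C /dev/disk/by-id/ata-SSD_D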

In all three I have disabled the corosync, pve-ha-crm, and pve-ha-lrm services. This seems to reduce disk wear a bit. But I haven't had any issues with disk wear. I guess time will tell.
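
On non-clustered nodes, disabling them is just (same service names as above):

Code:
# single standalone nodes, no cluster or HA, so these can stay off
systemctl disable --now pve-ha-crm pve-ha-lrm
systemctl disable --now corosync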

The other thing I will say is I keep my VMs very small, and store almost no data on Proxmox or my VMs at all. My average VM size is 32GB. All my important data sits on a separate NAS, where I do snapshots and backups outside of Proxmox. In VMs, all the important data is stored on NFS shares mounted via fstab from the NAS. In Docker (which is about 75% of my workloads), I use the Docker NFS driver to mount persistent volumes directly to the NAS. I also back up my VMs to the NAS on a regular basis as well. For my situation, this means I can replace all the drives in a Proxmox node if I wanted, do a fresh install of Proxmox, and have all my VMs, Docker containers, and data back up and running in under an hour.
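
The NFS-backed Docker volumes look roughly like this (the NAS address and export path are examples):

Code:
# persistent volume served straight from the NAS via the local driver's NFS support
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.50,rw,nfsvers=4 \
  --opt device=:/export/appdata \
  appdata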

I find that for the VMs and Docker containers I run, I don't see all that much improvement/speed increase between SATA SSDs and M.2 NVMe drives. I have done it both ways. Maybe that's because my data doesn't reside in Proxmox, not sure. But my network is all 10GbE, so everything seems to run fast enough for my needs. I would always put the OS on mirrored ZFS drives though, if at all possible.
 
2. A ZFS/BTRFS mirror is seamless, has filesystem integration, inline compression, snapshots, etc. It is a superior option by any measure.
3. In my view, ZFS is the superior choice over BTRFS, both because it is older, more stable and in broader use, and because BTRFS support in PVE's tooling is still considered beta.

The thing is, I am planning on using consumer-grade NVMe drives (Seagate FireCudas), and as far as I can read, running ZFS on non-enterprise SSDs is heavily discouraged. It's not entirely clear to me whether BTRFS has the same usage-related challenges as ZFS on that grade of hardware; some say yes, some say no.

So now I'm thinking about just doing an mdadm mirrored setup on the NVMe drives, formatting it with ext4 and using directory-based storage for VM images; as far as I can tell that is done using qcow2 disk images, and it should work on a normal file system just fine.
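
For the record, roughly what I have in mind (untested, and the device, mount point and storage names are made up):

Code:
# mirror the two NVMe drives, put ext4 on top and register it as directory storage
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mkfs.ext4 /dev/md0
mkdir -p /mnt/vmstore
mount /dev/md0 /mnt/vmstore   # plus an fstab entry to make it permanent
pvesm add dir vmstore --path /mnt/vmstore --content images,rootdir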

I'm not really worried about performance; none of the VMs are going to be doing anything disk-intensive. Nor do I feel like I need snapshotting either, even though that should be possible, according to the wiki.
 
running ZFS on non-enterprise SSDs is heavily discouraged.
I think this statement is very controversial.
I agree - non-enterprise SSDs are heavily discouraged for enterprise systems, but ZFS....
I've never seen such recommendations for ZFS, and I find it difficult to understand the meaning of this statement.
As far as I know, ZFS performs significantly fewer disk operations than any other file system due to more efficient caching and efficient organization of written data streams. It probably doesn't matter much in practice, but in theory this would mean that a drive running ZFS lasts longer, not shorter.
The fault tolerance of ZFS's redundancy and recovery mechanisms is also significantly higher than that of other systems and many hardware solutions. Using ZFS increases the chances of recovering data from a damaged array.
 
as far as I can read, running ZFS on non-enterprise SSDs is heavily discouraged

I agree - non-enterprise SSDs are heavily discouraged for enterprise systems, but ZFS....
Why is everyone using the passive voice when recounting that "consumer drives are discouraged"? By whom? And under what circumstances? There is nothing INHERENTLY wrong with using non-enterprise drives for ZFS pools, as long as you understand the implications, namely:
1. ZFS is write-heavy. Consumer drives have relatively poor endurance and will not last as long as enterprise drives. If your use case is to hammer the filesystem 24/7, you might want to choose a different mechanism. If your use case is a homelab, the drives will outlive their usefulness.
2. Consumer drives are not tuned for opportunistic garbage collection, which means they will periodically (and seemingly without reason) slow WAY down and then come back. For this and other reasons, they will not light your world on fire performance-wise. If your use case demands high continuous IOPS, you will want a different solution. If it's for a homelab, they work just fine.
3. Consumer drives do not have PLP (power loss protection). What that means is that they're susceptible to data loss on power failure, but understand the implications: the "data loss" is whatever was in the SSD's precommit buffer, which means that on any modern CoW or journaled filesystem you will simply be missing whatever writes were due to be written. If your use case is write-mission-critical, this is a direct concern. For home use... I think you might begin to see a pattern.

The fault tolerance of ZFS's redundancy and recovery mechanisms is also significantly higher than that of other systems and many hardware solutions. Using ZFS increases the chances of recovering data from a damaged array.
Please enlighten me; have you EVER recovered anything from a damaged ZFS pool? ZFS is significantly more difficult data-recovery-wise than a traditional block RAID + filesystem. ZFS is a very resilient storage method, but if you get to the point where there aren't enough vdevs to start the pool, it is NOT easy to recover. Moral of the story: make sure you have ample survivability and backups no matter what you do.
 
Let me start by saying that I have almost zero knowledge about ZFS; I have never used it on anything, so all my "knowledge" is from a couple of evenings of googling and reading posts, primarily from this forum (as my use case for ZFS is Proxmox-related), Stack Overflow and Reddit.

One important thing to mention is that when people either advocate for or against anything, it's hard, if not impossible, to determine what their background is. Maybe someone commenting on ZFS use on consumer SSDs is a datacenter manager running a huge cluster with thousands of VMs, while another is someone like myself just looking to run a single server with a couple of VMs and containers.

@alexskysilk Thank you for a pretty level-headed comment. I've collected some statistics from friends who are actually running a Proxmox setup that is pretty much the same as what I want, and the amount of writes they have seen over the last 6 months for a couple of VMs and containers is nowhere near the rated endurance of the consumer SSDs, and they are using a ZFS pool for main storage.
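
(For anyone curious, the numbers came from something like this on their nodes; the device name is an example.)

Code:
# lifetime writes and wear level reported by the NVMe drive itself
smartctl -a /dev/nvme0 | grep -E "Data Units Written|Percentage Used"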

So, while I still think I'm going to go the more old-school way of just creating a software-mirrored RAID and putting an ext4 partition on that, I'm not as "scared" of running ZFS on consumer hardware either.