Storage/Filesystem recommendations for new Proxmox user

DarkSpark

New Member
Oct 20, 2025
Hardware:
2 x 500GB SATA SSDs (WD Red SA500)
4 x 2TB NVMe SSDs (WD_BLACK SN850X)

Current plan:
Install Proxmox with ZFS RAID 1 on the two SATA SSDs.
Use the other four NVMe drives to create storage for VMs (Docker, etc.), CTs (Pi-hole, etc.), and SMB (probably OpenMediaVault).

What I'm struggling with is how to set up my storage. My 3 main goals for the data storage are:

1. Have some sort of bitrot protection, at least for the SMB share, because I plan on storing family photos/videos there (and yes, this will be backed up to a separate machine).

2. Avoid too much write amplification

3. Make maintenance / recovery simple


I'm thinking of setting up an OpenMediaVault VM and keeping the OMV data store in virtual disks on the node's data storage array. That should hopefully make moving/recovering/backing it up very simple.

I've tried to research ZFS, as that seems to be the favored filesystem for Proxmox, but there are a lot of posts about how it's highly customizable and how the best way to set it up depends on your exact use case. I'm worried about write amplification, and honestly I find ZFS a little more than I'd like to try to learn right now; otherwise I could just use ZFS RAID 10 and call it a day. OpenMediaVault uses BTRFS in order to provide previous file versions to users via snapshots (I think), but since BTRFS is a CoW filesystem, I've heard it would be bad to put it on a qcow2 image. But would CoW on CoW be alright if I set up BTRFS in Proxmox for those 4 drives?

Basically I've done a bunch of research, but I think I'm just confusing myself at this point.


How would you achieve these goals? And am I even in the ballpark of a viable solution?
 
What are the drive models? I would not recommend BTRFS as it's a tech preview (that also means less support/help), and BTRFS on the node would not provide OMV access to it directly, so you still need to have BTRFS inside and thus have CoW on CoW. You could also give the whole disks to the VM, which can then manage BTRFS, but that means you can't manage the storage via PVE anymore. I don't like giving total control over a resource to a guest. Is the previous version feature a hard requirement?
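For reference, whole-disk passthrough is done with qm set and a stable /dev/disk/by-id/ path. A minimal sketch, with a hypothetical VMID (100) and a placeholder disk ID:

Code:
# Find stable device paths for the NVMe drives.
ls -l /dev/disk/by-id/ | grep nvme

# Hand the whole physical disk to VM 100 as an additional SCSI disk.
# The VMID and the by-id name are placeholders for this example.
qm set 100 -scsi1 /dev/disk/by-id/nvme-WD_BLACK_SN850X_<serial>

As noted, PVE can then no longer manage that storage itself.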
 
Good catch; I guess that leaves me using mdadm or LVM/LVM-Thin to create the RAID 10 on the host (if RAID 10 is a good option here).

The drive models have been added to the original post.

I'd also like to avoid giving control of the drives to the guest. The previous version feature is not a hard requirement, just a nice to have.
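For the record, a rough sketch of that mdadm + LVM-Thin route, assuming the four NVMe drives show up as /dev/nvme0n1 through /dev/nvme3n1 and using placeholder names (md0, vg_data, a thin pool called data); mdadm isn't installed on PVE by default, and Proxmox does not recommend MDRAID:

Code:
apt install mdadm
# Software RAID 10 across the four NVMe drives (device names are placeholders).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

# LVM thin pool on top, registered with PVE for VM/CT disks.
pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -l 95%FREE -T vg_data/data
pvesm add lvmthin nvme-thin --vgname vg_data --thinpool data --content images,rootdir

Worth noting: neither MDRAID nor LVM checksums your data, so this route doesn't cover the bitrot-protection goal by itself.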
 
I would not recommend BTRFS as it's a tech preview (that also means less support/help)
That's not what it means. It means that the PVE devs say "we haven't tested this as completely as other options, and we haven't included controls for all its functionality." BTRFS is fully supported, it's just that you'd need to go to the CLI for some/much of the functionality, which is to say, outside the PVE toolset. BTRFS works well as a mirror or striped mirrors; do not use it in a parity RAID config. BTRFS has all the features of ZFS plus out-of-band dedup, in-place compaction, per-subvolume CoW, and other features that ZFS doesn't. It's also more performant in low-resource environments.

and BTRFS on the node would not provide OMV access to it directly
I don't actually understand what that means; please elaborate.
so you still need to have BTRFS inside and thus have CoW on CoW
No, you don't. "Inside" you can and should use the same filesystem options as you would with ZFS or any other block FS (or maybe I don't understand what you're trying to say).

On balance, I'd use BTRFS on any small (read: homelab) system with two drives used for the virtual disks. For anything larger use ZFS.
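If you do go BTRFS on the node, a minimal sketch of a striped mirror (raid10 profile) across the four NVMe drives, registered as a PVE storage; device names, the label, the mount point and the storage ID are all placeholders, and the PVE-side BTRFS plugin is the tech-preview part discussed above:

Code:
# BTRFS striped mirror: raid10 profile for both data and metadata.
# Device names below are placeholders.
mkfs.btrfs -L nvmepool -d raid10 -m raid10 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1

mkdir -p /mnt/nvmepool
mount /dev/nvme0n1 /mnt/nvmepool   # mounting any member device mounts the whole filesystem
echo 'LABEL=nvmepool /mnt/nvmepool btrfs defaults 0 0' >> /etc/fstab

# Register it with PVE (storage ID is a placeholder).
pvesm add btrfs nvmepool --path /mnt/nvmepool --content images,rootdir

# Periodic scrubs are what detect (and, with the mirrored profile, repair) bitrot.
btrfs scrub start /mnt/nvmepool

Inside the guests you'd still use a normal filesystem such as ext4 on the virtual disks.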
 
It means less support because fewer people use it. Just because you have BTRFS on the node does not make the virtual disk on it that you give to OMV also BTRFS. So if you want OMV to use BTRFS (for the previous version snapshot stuff), you need to format the disk inside as BTRFS. That would then be CoW on CoW. I'm not sure how to explain this better.
 
@Impact, it seems well explained to me. Given that you now have the drive models, and that the previous version feature is not a hard requirement, do you have any ideas on how I can achieve what I'm looking for?
 
That explanation was for the person in the post above, not you :)
I gave you all the options and my preferences, but I can't decide this for you. MDRAID would work too, but Proxmox does not recommend MDRAID.
To reiterate: my recommendation is to use ZFS, let the ZFS pool be managed by the node, and then give a virtual disk from there to a guest to provide the shares.
A VM (ZVOL) could use ext4 on its virtual disk, or you get a ZFS dataset when you create a CT. The latter is faster/better from a storage standpoint.
What you use inside to provide the shares is up to you. Cockpit would be one choice that works fine in a CT, or you can use OMV in a VM but with ext4 rather than BTRFS. Just be aware that ZVOLs can be punishing.
You can also use BTRFS instead of ZFS, but I haven't used that in a decade and don't recommend it for the reasons above.
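To make that concrete, a minimal sketch of the recommended layout, using hypothetical pool/dataset/storage names (tank, tank/shares, nvme-zfs), placeholder device names (use /dev/disk/by-id/ paths on a real system) and a placeholder CT ID:

Code:
# Striped mirrors ("RAID 10") across the four NVMe drives; ashift=12 for 4K sectors.
zpool create -o ashift=12 tank \
    mirror /dev/nvme0n1 /dev/nvme1n1 \
    mirror /dev/nvme2n1 /dev/nvme3n1
zfs set compression=lz4 tank

# Register the pool with PVE for VM disks (ZVOLs) and CT volumes (datasets).
pvesm add zfspool nvme-zfs --pool tank --content images,rootdir --blocksize 16k

# A dataset for the share data, bind-mounted into e.g. a Cockpit/Samba CT (CT ID 101 is a placeholder).
zfs create tank/shares
pct set 101 -mp0 /tank/shares,mp=/srv/shares

# Scheduled scrubs are what actually catch and repair bitrot on the mirrors.
zpool scrub tank

An OMV VM would instead get a virtual disk (a ZVOL on tank) and format it as ext4 inside, as suggested above.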
 
Just because you have BTRFS on the node does not make the virtual disk on it that you give to OMV also BTRFS.
That's true for any virtualized environment. A nested NAS will always incur a penalty; as you mentioned, CoW on CoW kills performance and destroys space efficiency. To avoid this, don't have a nested NAS on your hypervisor: install your NAS on the metal. Both OMV and TrueNAS have some virtualization/docker support.
 