RAID1 for a home server: ZFS, BTRFS or ext4?

Nov 14, 2024
Hi, everybody,
I own a Terramaster F4-424 Max with the following storage installed:
2x 1TB M.2 NVMe SSDs
4x 3.5" HDDs
1x 32GB non-ECC RAM

My plan is to use the two NVMe SSDs as a RAID1 for the OS and a couple of VMs. One VM shall be configured for OMV using the 4 HDDs via passthrough.

My question is, what filesystem would be suitable for my scenario?

- ZFS: I tried to install a ZFS RAID1, but for some reason the system doesn't boot.
- BTRFS: I installed the system on BTRFS RAID1. This works, but I'm not sure if it's a good idea, because BTRFS is marked as "Technical Preview".
- ext4: ext4 is the default fs for Proxmox VE, but I haven't tried this configuration yet. Would it be a good idea to install a recent Debian on a software RAID1 with ext4 + LVM and install Proxmox VE on top, as mentioned here?
- Or I skip the idea of RAID1 and use the default ext4 + LVM setup: one NVMe SSD for the OS and the other for VMs.
https://wiki.debian.org/DebianInstaller/SoftwareRaidRoot
https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_12_Bookworm
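For what it's worth, the Debian-then-Proxmox route from those two links boils down to roughly the following (a sketch only; device names, partition numbers and sizes are assumptions for illustration, not a tested recipe):

```shell
# Assumption: two blank NVMe drives, each already partitioned with an ESP (p1)
# and a large second partition (p2) for the mirror.

# Build the RAID1 mirror across both drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
    /dev/nvme0n1p2 /dev/nvme1n1p2

# Put LVM on top of the mirror and create a root LV with ext4
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 64G -n root vg0
mkfs.ext4 /dev/vg0/root

# After installing Debian 12 onto that root, add the Proxmox VE repository
# and install it on top (see the pve.proxmox.com link for the full steps):
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-install-repo.list
apt update && apt full-upgrade && apt install proxmox-ve
```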


What do you think about these scenarios? Any suggestions, any different scenarios?

Regards, Pete
 
Hi, everybody,
I own a Terramaster F4-424 Max with the following storage installed:
2x 1TB M.2 NVMe SSDs
4x 3.5" HDDs
1x 32GB non-ECC RAM

My plan is to use the two NVMe SSDs as a RAID1 for the OS and a couple of VMs. One VM shall be configured for OMV using the 4 HDDs via passthrough.

My question is, what filesystem would be suitable for my scenario?

- ZFS: I tried to install a ZFS RAID1, but for some reason the system doesn't boot.

An error message would help in finding and fixing the root cause.

- BTRFS: I installed the system on BTRFS RAID1. This works, but I'm not sure if it's a good idea, because BTRFS is marked as "Technical Preview".
- ext4: ext4 is the default fs for Proxmox VE, but I haven't tried this configuration yet. Would it be a good idea to install a recent Debian on a software RAID1 with ext4 + LVM and install Proxmox VE on top, as mentioned here?

This would work but is not officially supported.
The only fully supported option is ZFS; btrfs is still a technology preview:
https://pve.proxmox.com/wiki/Software_RAID

What do you think about these scenarios? Any suggestions, any different scenarios?

First, RAID is not a replacement for backups; its purpose is to reduce downtime in case of an error. So it's really a good question whether you actually need it.
An alternative way might be to use one NVMe for the operating system and the other one as VM space.
ZFS is way more flexible and offers more features (including bitrot protection, replication to remote storages, software RAID etc.) than ext4 + LVM, but you need to consider that ZFS plus Proxmox VE also take a toll on consumer SSDs. Are you planning to build a Proxmox VE cluster later? Then ZFS is handy due to its storage replication. If not, you might be better off installing your operating system on a small LVM partition and using the remainder as a ZFS or LVM-thin pool for VMs.
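To make that last variant concrete, a minimal sketch (volume group, pool and storage names are placeholders; this assumes free space was left in the installer's volume group):

```shell
# Turn the remaining free space of volume group 'pve' into an LVM thin pool
lvcreate -l 100%FREE --thinpool vmdata pve

# Register it in Proxmox VE as storage for VM and container disks
pvesm add lvmthin local-vmdata --vgname pve --thinpool vmdata \
    --content images,rootdir
```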

For the data HDDs I wouldn't consider any file system other than ZFS (bitrot detection and flexibility). It might be that another NAS OS than OMV is better suited (as far as I know you need to install a plugin in OMV for ZFS support).
 
I'm running my Proxmox home server on BTRFS.
Works flawlessly and GREAT! Running snapper, also no issues.
Don't run ZFS, it needs too much memory IMO and kills your SSDs!
 
I'm running my Proxmox home server on BTRFS.
Works flawlessly and GREAT! Running snapper, also no issues.

What has snapper to do with the OP's questions? Am I missing something?
Don't run ZFS, it needs too much memory IMO and kills your SSDs!
That depends on the settings and the hardware used; your sentence isn't true as general advice.
For example, btrfs RAID5/6 is known to have data loss issues.
 
What has snapper to do with the OP's questions? Am I missing something?

That depends on the settings and the hardware used; your sentence isn't true as general advice.
For example, btrfs RAID5/6 is known to have data loss issues.
Nothing, just a remark, as it is very useful for making snapshots!
BTRFS has really matured over the last few years and I have not seen any data loss issues.
Sorry if you feel offended, I'm just sharing my experience...
 
Nothing, just a remark, as it is very useful for making snapshots!

Well, ZFS, btrfs and LVM all allow snapshots, and there are also other tools for snapshots (e.g. timeshift, which works with btrfs, or rsync), so I still fail to see the relation to the OP's question.
BTRFS has really matured over the last few years and I have not seen any data loss issues.


Sorry if you feel offended, I'm just sharing my experience...

I don't feel offended personally. But I take big offense at offering potentially disastrous advice for a filesystem still considered experimental by the Proxmox developers, with known problems for some RAID levels, and nothing better than anecdotal evidence as a pro-argument.
 
Well, ZFS, btrfs and LVM all allow snapshots, and there are also other tools for snapshots (e.g. timeshift, which works with btrfs, or rsync), so I still fail to see the relation to the OP's question.

I don't feel offended personally. But I take big offense at offering potentially disastrous advice for a filesystem still considered experimental by the Proxmox developers, with known problems for some RAID levels, and nothing better than anecdotal evidence as a pro-argument.
Nothing better than honest advice from someone who has actual experience!
Read the latest btrfs docs more carefully, as what you are referring to is outdated info!
 
What I have done in the past with a similar setup is ZFS for the Proxmox install, and then BTRFS for the passed-through drives. The only thing is, I don't use BTRFS to create a RAID of any type. I follow the example of what Synology does: I create my RAID array with mdadm, then format that device with a BTRFS file system. This is easy to do in OMV if you install OMV Extras and then use the md plugin (openmediavault-md 7.0.2-1). All of this can be done in the GUI once you install the needed plugin. OMV Extras can be found here: https://wiki.omv-extras.org/ You can also download a ZFS plugin for OMV if you want to go that route.
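On the command line, that Synology-style layout amounts to something like this (illustrative sketch; the OMV md plugin does the equivalent through the GUI, and the four drive names and RAID level are assumptions):

```shell
# mdadm handles the redundancy (here RAID5 over four disks) ...
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd

# ... and btrfs sits on top as a plain single-device filesystem
mkfs.btrfs -L nasdata /dev/md0
mount /dev/md0 /srv/nas
```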

As far as ZFS not booting in a RAID1: did you go into the BIOS and change the boot order? That is required for sure.
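If the boot order is already correct, it may also be worth checking the boot loader setup from a rescue shell; on ZFS installs Proxmox uses proxmox-boot-tool to keep the ESPs of both mirror members in sync:

```shell
# List the ESPs proxmox-boot-tool knows about and their sync state
proxmox-boot-tool status

# Rewrite the boot loader configuration on all registered ESPs
proxmox-boot-tool refresh
```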
 
I have been using btrfs for about ten years. Having read about the problems with parity RAID, I have never even tried it; there are those who use it and, to reduce the risk, use raid1, raid1c3 or raid1c4 for the metadata, but I would recommend not using RAID 5/6 at all. I only use RAID1. Years ago there were problems that made it more difficult to manage in the case of a degraded RAID, but those have now been solved.
I think btrfs could be useful for root, but I would not recommend it for VM disks in most cases. For example, I use LVM-thin for VM disks; it has less impact on the disks (I use consumer SSDs) than having to go through a double filesystem (on the host as well as the one used inside the VM).
If OMV uses the disks directly it could be even better, and it could also use btrfs if wanted; I think it is excellent for a file server. The important thing is to be careful with the management of snapshots (if you use them), especially regarding space, and on HDDs also regarding performance in the medium/long term. From what I have seen, with large quantities of data and many changes, fragmentation has a big impact: after 1-3 years it is noticeable even for small operations, and if you defragment you have to take into account that you lose deduplication (in the presence of snapshots), so more space is occupied. If you don't use snapshots, or only short-lived ones for backup operations, you can set autodefrag. There is also compression, useful for space and in some cases for performance.
Hope this info can be useful if btrfs will be used in some parts.
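For reference, the metadata-redundancy and mount-option points above translate into something like this (illustrative values; raid1c3 needs a reasonably recent kernel, and the drive names are placeholders):

```shell
# RAID1 for data, three metadata copies for extra safety
mkfs.btrfs -d raid1 -m raid1c3 /dev/sdb /dev/sdc /dev/sdd

# Mount with transparent compression; add autodefrag only if you do not
# keep long-lived snapshots (defragmenting breaks their space sharing)
mount -o compress=zstd /dev/sdb /mnt/data
```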
 
Thank you all for your helpful replies, but I see it's not easy to decide...

I think for my (home server) scenario it would be more suitable to configure both NVMe SSDs without RAID, using the default fs ext4, because availability is not that important and I have to do backups in any case:

- First NVMe for Proxmox VE and Docker
- Second NVMe for virtual machines
- 4x HDDs used as NAS/storage

But overall I'm not sure which setup I should use for the NAS/storage. My thoughts on that:
I'm an experienced Linux user, but not a pro; therefore the setup should be reliable, easy to maintain and not too complicated to configure.

To have redundancy and a small performance gain compared to RAID1, I prefer RAID5. As I learned that BTRFS is not good for RAID5, I would have to use ZFS (e.g. TrueNAS supports ZFS).

The more I dive into this stuff, the more another question arises; I think it's better to open a new thread for it.
The question is: is it a good idea to use Proxmox VE to set up a virtualized NAS with HDDs via passthrough at all?
https://forum.proxmox.com/threads/truenas-on-proxmox-good-idea.157904/post-723211

Regards, Pete
 
- First NVMe for Proxmox VE and Docker
- Second NVMe for virtual machines

But overall I'm not sure which setup I should use for the NAS/storage. My thoughts on that:
I'm an experienced Linux user, but not a pro; therefore the setup should be reliable, easy to maintain and not too complicated to configure.

To have redundancy and a small performance gain compared to RAID1, I prefer RAID5. As I learned that BTRFS is not good for RAID5, I would have to use ZFS (e.g. TrueNAS supports ZFS).

RAID5's ZFS equivalent, aka RAIDZ, got flak here in the past due to its impact on performance. It also needs at least three drives.
If you have three drives:
I would set up LVM + ext4 on the OS drive and a ZFS mirror on the two other NVMes.
You would need to do backups in any case, and a Proxmox VE OS plus config restore doesn't take long in case of an error.
Another variant would be to partition the three drives into one OS and one data partition each and build two ZFS mirrors from them.

For two drives I would stick with one ZFS mirror for everything.
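That two-drive variant is essentially a one-liner (pool name and device paths are placeholders; ashift=12 is a common choice for 4K-sector NVMe drives):

```shell
# Create a ZFS mirror over both NVMe drives and enable lightweight compression
zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1
zfs set compression=lz4 tank
```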