Btrfs vs ZFS on RAID1 root partition

Sep 14, 2020
There are many differing opinions, and there is no perfect file system. But now that Btrfs is also available in the Proxmox installer, has anything improved?

For OS-only use, with most of the load going to Ceph (VMs and CTs), and wanting to prioritize performance but above all data security, high availability and fast troubleshooting (disk failures and the like), what would be the best filesystem to choose today when installing a new node?

I'm thinking of creating a RAID 1 with two disks just for the operating system, with the aim of making it last a long time. Now it remains to choose the file system.

Anyone who can help, I appreciate it!

Thanks!
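
For reference, this is roughly what a two-disk mirror looks like when each filesystem is set up by hand (a sketch with placeholder device names /dev/sdX and /dev/sdY; for the boot/root disk the Proxmox installer handles this itself, so this is only to illustrate the two options):

```
# ZFS: create a two-disk mirror pool (illustration only; the installer
# builds the root pool itself when you pick "ZFS RAID1")
zpool create -o ashift=12 tank mirror /dev/sdX /dev/sdY

# Btrfs: create a two-disk RAID1 filesystem, mirroring data and metadata
mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY
```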
 
Both ZFS and BTRFS will have problems booting with a failing device, as discussed here. BTRFS is also still not as mature as ZFS.

If you are also weighing hardware RAID vs. software RAID: we always go with hardware RAID, often a 3-disk RAID1 with the default LVM layout from the installer, which is also often the default for off-the-shelf servers.
 
I've had ZFS RAID-1 fail on OS boot drives before. It shows the zpool as degraded, but it still boots.

BTRFS RAID-1 is another story. If a drive fails, you are toast if you are only using 2 drives. Use it only when you don't care about booting.
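
For what it's worth, these are the usual commands for spotting exactly that kind of degraded state (this assumes the default Proxmox root pool name rpool and a Btrfs root mounted at /):

```
# ZFS: a failed mirror member shows the pool state as DEGRADED
zpool status rpool

# Btrfs: list member devices and per-device error counters
btrfs filesystem show /
btrfs device stats /
```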
 
Thanks for the comment.

Buddy, I haven't had any problems booting ZFS even when one of the disks is broken. That is, even though the ZFS mirror is degraded, I can still boot Proxmox. I ran this test a few times.

It is true that I had this problem with Btrfs. Once the Btrfs RAID1 array is degraded, I cannot boot without human intervention to tell the system to mount the filesystem even though it is degraded.

However, this problem did not appear for me in the case of ZFS RAID1.

Is this problem by itself decisive for choosing ZFS over Btrfs?

I've been thinking about not using hardware RAID1 (am I making a mistake?), because in case of a hardware failure, if I need to move the disks and read the data on another machine, I can only do that if the new machine has a compatible RAID controller. In addition, from what I have read, ZFS has more layers of data protection: it has valuable features for data integrity, recovery from hardware failures, crashes or power loss, verification checks (scrubs), and other things that I would not get from a simple hardware RAID1 with ext4, for example.

Is this correct? Or do you think differently?

Thank you for your opinion!
 
It is true that I had this problem with Btrfs. Once the Btrfs RAID1 array is degraded, I cannot boot without human intervention to tell the system to mount the filesystem even though it is degraded.
This is apparently normal for BTRFS, though I don't know why. You can add the settings beforehand to get the same behavior as ZFS (i.e., keep booting even when the array is degraded).
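
Something along these lines should do it (a sketch, not a tested recipe; it assumes a GRUB-booted system with a Btrfs root, and note that allowing degraded mounts by default carries its own risks):

```
# /etc/fstab — add the "degraded" mount option for the root filesystem:
UUID=<root-fs-uuid>  /  btrfs  defaults,degraded  0  0

# /etc/default/grub — pass the same option to the initramfs mount of /:
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootflags=degraded"

# apply the GRUB change:
update-grub
```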
Is this correct? Or do you think differently?
Data integrity is indeed better protected with both BTRFS and ZFS. RAID in general prevents instant failure of the system when (a certain number of) drives stop working, but only the checksumming of BTRFS and ZFS detects corrupted data (while a drive is slowly failing).
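
The checksums only catch corruption when the data is actually read, which is why both filesystems also offer a scrub that reads and verifies everything (and repairs it from the other mirror copy); for example (rpool again being the Proxmox default pool name):

```
# ZFS: read all data, verify checksums, repair from the good mirror copy
zpool scrub rpool
zpool status rpool      # shows scrub progress and any repaired errors

# Btrfs: the equivalent for a Btrfs filesystem mounted at /
btrfs scrub start /
btrfs scrub status /
```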
Thank you for your opinion
Power Loss Protection on enterprise SSDs, or the battery backup of RAID controllers, really helps with keeping the filesystem intact when there is an unexpected power failure or sudden reboot. An added bonus is that sync writes (usually slow, but essential for filesystem metadata) can be cached and become much faster (with less write overhead).
 
The thing is, I don't have the budget for enterprise-class SSDs just for booting. My RAID controllers also don't have battery backup. So I want to trust that RAID1 with BTRFS or ZFS can give me some protection. Can they? Which would be better? I hear that ZFS is heavy, that it consumes a lot of hardware resources (RAM), and I also hear that it slows things down. Would that be too critical? I've never noticed anything too serious about it, at least for the use I make of it, that is, on the operating system disk.

Thanks!
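
On the RAM question: by default ZFS lets its ARC read cache grow to a sizeable share of the host's memory, but it can be capped if that is a concern. A minimal sketch (the 4 GiB value is only an example, pick what fits the host):

```
# /etc/modprobe.d/zfs.conf — cap the ARC at 4 GiB (value in bytes)
options zfs zfs_arc_max=4294967296

# make the setting available at early boot, then reboot
update-initramfs -u -k all

# or change it at runtime:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```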
 
I recommend buying small used enterprise SSDs, which go here for about 20 euros. That is cheaper than a new consumer SSD.
For 20-30 euros I'd want one too. Can you find any that ship to Brazil? If you could point me to a link... I have paid that price (30 euros) for a consumer SSD here.

Here I can't find any used data-center SSDs. When I do find one, it's new and very expensive.
 
In my last build I used Samsung 863 drives ... maybe you can find them on eBay ...
 
I recommend buying small used enterprise SSDs, which go here for about 20 euros. That is cheaper than a new consumer SSD.
And you can get new enterprise drives cheap too. A new (OEM) Samsung PM883 240GB only costs 42€. If you can afford to buy a server and pay its electricity bill, then 42€ for a proper SSD shouldn't be a problem... and if you aren't willing to pay that because there are crappy QLC consumer SSDs for 14€, then you are saving in the wrong place.
 