Choose between EXT4, XFS, ZFS and BTRFS? Why?

Kiyweo

New Member
Oct 15, 2023
Hi there!

I'm not sure which filesystem to pick between EXT4, XFS, ZFS and BTRFS for my Proxmox installation. I want something that, once installed, will perform well and hold up.

I've heard that EXT4 and XFS are pretty similar (what's the difference between the two?).
ZFS and BTRFS have some similarities (ZFS has much more functionality, while BTRFS is still in a "test" phase and not necessarily recommended).

Unfortunately, I'm drowning in the mass of information and can't find my way around.


My installation:

CPU: AMD Threadripper 3945WX
RAM: 32GB DDR4 ECC
Storage:
- 2× 512GB NVMe SSDs
- 1× 1TB SATA SSD
- 2× 1TB HDDs (more in the future)


I plan to run a few VMs of the type:

- TrueNAS CORE (VM on the Proxmox NVMe)
  PCI passthrough devices:
  - 1× 512GB NVMe SSD
  - 1× 1TB SATA SSD
  - 2× 1TB HDDs (more in the future, for backups and archives)

- Windows 11 (VM on the Proxmox NVMe)
- Game/web/other servers...


Thank you! ;)

(sorry if my English is approximate)
 
The ability to take snapshots is incredibly useful. It's obviously convenient when you want to try out experimental changes that you might need to roll back. But it also helps a lot when making regularly scheduled backups, as you can take atomic snapshots of the state of the container or VM without having to power it down. I consider snapshots mission-critical for this very reason. And you can even use snapshots to give you a quick way to undo stupid mistakes (e.g. accidentally deleted files). Just write a script that takes hourly snapshots and makes them available in a hidden directory. This goes a long way towards peace of mind.
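As a minimal sketch of such a script, assuming a ZFS dataset (the dataset name and retention count below are placeholders; adjust to your pool layout), something like this run from cron would do:

Code:
#!/bin/sh
# Hypothetical hourly snapshot script; run from cron, e.g.:
#   0 * * * * root /usr/local/bin/hourly-snap.sh
DATASET="rpool/data"   # placeholder: your dataset
KEEP=24                # retain one day of hourly snapshots

zfs snapshot "${DATASET}@hourly-$(date +%Y%m%d-%H%M)"

# Prune snapshots beyond the retention window (oldest first).
zfs list -t snapshot -o name -s creation -H "$DATASET" \
  | grep "@hourly-" \
  | head -n "-$KEEP" \
  | xargs -r -n1 zfs destroy

On ZFS the "hidden directory" part comes for free: every dataset exposes its snapshots read-only under <mountpoint>/.zfs/snapshot.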

You have to give up all of these useful features if you use one of the more traditional file systems. Having said that, if you decide that snapshots aren't important to you: I have had great luck with XFS for many years. It's an extremely mature filesystem that is actively maintained. Whether you'll notice a difference compared to ext4 is unclear. For most use-cases, both of these traditional file systems are sufficiently similar that you wouldn't make a huge mistake picking one over the other.

Of course, ZFS gives you a bunch of other features. It makes it very easy to configure RAID arrays, but that's not something you have to give up completely if you decide against ZFS. Both hardware and software (e.g. LVM) RAID work well on Linux. Each has its own set of pros and cons when you look at the specifics, but there is nothing fundamentally wrong with any of these approaches.
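To illustrate the difference in effort, a mirrored ZFS pool is a one-liner, and a software RAID1 logical volume under LVM isn't much longer (device and volume names here are placeholders):

Code:
# ZFS: create a mirrored pool from two disks
zpool create tank mirror /dev/sdb /dev/sdc

# LVM: create a RAID1 logical volume in an existing volume group
lvcreate --type raid1 -m 1 -L 500G -n vmdata vg0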

The downside with ZFS (and to a lesser degree with the other filesystems) is that it tends to be write-intensive. There is a good bit of tuning you can do to minimize the amount of background write activity, but you'll still be looking at anywhere between hundreds of gigabytes and possibly several terabytes of writes per day. The exact number is surprisingly difficult to predict, and different users have very different experiences; even minute configuration changes can dramatically change these numbers.
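As a hedged example of that tuning (the dataset name is a placeholder, and sensible values depend entirely on your workload):

Code:
# Stop recording access times (pure metadata write traffic)
zfs set atime=off rpool/data

# Cheap compression reduces the amount of data actually written
zfs set compression=lz4 rpool/data

# Flush transaction groups less often (the default is 5 seconds)
echo 15 > /sys/module/zfs/parameters/zfs_txg_timeout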

With "spinning rust", none of that matters much. It's not limited by number of writes. On the other hand, traditional media have very poor seek rates and number of operations per second. And that's going to be painfully obvious when using virtual machines. So, most users opt for solid state storage these days. And that can dramatically improve performance -- with the downside that you need have to watch the total amount of writes. SSDs only have a limited lifetime and once you write too much data they die.

That's why you see the often-repeated advice to buy enterprise-grade SSD storage instead of consumer models. It'll make a big difference in how frequently you have to deal with failed media. And honestly, even RAID isn't going to help much here. A good RAID solution will equalize the amount of writes across all media -- which means they'll likely all die in close temporal proximity. Get drives that are rated for a high number of total writes and actively monitor the remaining reserves. If you do that, then ZFS is going to be fine. If you don't, then ext4 or XFS might do a little better; but you're just postponing the inevitable drive failure.
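smartmontools makes the monitoring part easy; which field reports wear differs by vendor, so treat the attribute names below as examples rather than gospel:

Code:
# NVMe drives report "Percentage Used", counting up towards 100%
smartctl -a /dev/nvme0 | grep -i 'percentage used'

# SATA SSDs usually expose a vendor wear attribute instead
smartctl -A /dev/sda | grep -i -e wear -e lifetime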

And whatever you do, make sure you have a RAID setup with good redundancy and that you also have backups, preferably on a separate set of disks.
 
Don't overlook lvm-thin. It looks like you intend to pass through real block devices to a guest (NAS?), so you may not have large guest storage requirements in the immediate term. You can take atomic snapshots with lvm-thin-backed storage. Every time I have revisited ZFS, I have never been able to justify its cost and complexity for a home server.
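A rough sketch of what that looks like (volume names are placeholders; unlike classic LVM snapshots, thin snapshots need no preallocated size):

Code:
# Snapshot the thin LV backing a guest disk
lvcreate -s -n vm-100-disk-0-snap pve/vm-100-disk-0

# Thin snapshots are created inactive; activate one explicitly
# if you want to read from it, e.g. for a backup
lvchange -ay -Ky pve/vm-100-disk-0-snap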
 
Indeed, the default lvm-thin is great.
TrueNAS requires ZFS, and it's aimed at enterprise use with integrity requirements, which calls for datacenter SSDs with PLP (power-loss protection) for reliability and durability.
Passing through disks on the CLI with qm set and a /dev/disk/... path isn't recommended, and you lose flexibility.
Only a PCI HBA with its attached disks can be passed through to a single VM.
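For reference, that CLI passthrough looks something like this (the VM ID and device path are placeholders):

Code:
# Attach a whole physical disk to VM 100 as an extra SCSI device
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL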
No problem here: my vDisks use 90% of the real disk and I haven't seen a bottleneck yet.
(sorry for my wording)
 
For now, I don't plan to use ZFS on my main SSD (the one Proxmox is installed on), so it's between XFS and EXT4 for my use case.
It remains to be seen which would be the most stable and performant for running my VMs and a few LXC containers.

BTRFS looks super interesting, but from what I understand it's unfortunately not yet mature enough to rely on (unless you're not afraid of reinstalling Proxmox).
What's more, its performance doesn't seem to be up to scratch at the moment, so you might as well use ZFS.

I've come across a short page from Red Hat, "How to Choose Your Red Hat Enterprise Linux File System"; now I just need to understand it properly (and translate it into French :') ).
https://access.redhat.com/articles/3129891

To give my VMs and LXC containers at least minimal protection, I'm thinking of backing up once a week to one of my HDDs (passed through to the virtualized NAS).
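If I read the docs right, that could be a simple cron entry around vzdump; something like this sketch (the storage name "nas-backup" is just my guess):

Code:
# /etc/cron.d/weekly-backup (hypothetical): back up all guests
# every Sunday at 03:00 to a storage named "nas-backup"
0 3 * * 0 root vzdump --all --storage nas-backup --mode snapshot --compress zstd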

If anyone has experience or knowledge of EXT4 and XFS, I'm all ears.

Thank you all, your explanations are very enlightening ❤️
 
The root volume (Proxmox/Debian OS) requires very little space and will be formatted ext4.
See this. It explains how to control the data volume (guest storage), if any, that you want on the system disk.
I am not sure where XFS might be more desirable than ext4. Maybe for a further logical volume dedicated to ISO storage or guest backups? But not for live guests: being on a filesystem, those would then need to be qcow2 file-based, which, AIUI, is the least common/optimal choice.
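If you did want such a volume, a sketch might look like this (the VG name, size and mount point are placeholders):

Code:
# Carve an XFS logical volume for ISO images out of the pve VG
lvcreate -L 100G -n iso pve
mkfs.xfs /dev/pve/iso
mkdir -p /mnt/iso
echo '/dev/pve/iso /mnt/iso xfs defaults 0 2' >> /etc/fstab
mount /mnt/iso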
 
I used Ventoy in UEFI mode; the Proxmox 8.0.1 image works every time, while 8.1.1 and 8.1.2 give a "missing ISO" message every time.
As 8.0.1 works, and it was placed on the Ventoy drive in the same way as the later versions, I can say that something changed after 8.0.1.
 
I don't see how this is related to this thread, but update your Ventoy drive to (at least) 1.0.97:
https://github.com/ventoy/Ventoy/releases/tag/v1.0.97
 
Is ZFS better than ext4 for the boot drive with regard to SSD wear, i.e. fewer writes? I would assume yes, because it handles more in its RAM cache, but I might be wrong. Thoughts? Thanks
 
