Q: BTRFS <> comments/thoughts on how it is working in Proxmox - anyone?

fortechitsolutions

Hi, I am curious to ask. I've bumped into BTRFS over the last few years and have mostly avoided it in favour of EXT4 - partly because the use case was not there, and partly because the trust was not there yet. I'm now mindful that BTRFS is maybe less young and less sharp on the edges than it used to be (ie, less prone to randomly losing data or failing horribly?) :)

so I'm curious if anyone has comments, thoughts, from real world use / testing?
- apparently BTRFS has been a supported option in Proxmox since the v7 platform, ie, for more than a year now
- I have not played with it anywhere yet
- it appears easy enough to use, either for the Proxmox root filesystem or, I assume, as a local storage filesystem used for VM image storage
- and based on my reading of the Proxmox wiki, it is trivial enough to enable filesystem-level compression (ie, as per the discussion here > https://pve.proxmox.com/wiki/BTRFS - see the sketch after this list)
- so I am just kind of curious to hear real-world experience, comments, etc.
- does anyone have >3 months of active real-world Proxmox BTRFS use? Any comments? "It just works"? Or "OMG run away"? Or something else?
- is anyone doing compression at the filesystem level, and can you comment on the performance impact? (ie, 'fine if you don't care that your IO performance is garbage' vs 'pretty OK if you have some spare CPU cycles and modest workloads'?)
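(For reference, per the wiki linked above, compression is enabled via a mount option; a minimal sketch, where the mount point, UUID placeholder, and zstd level are all illustrative:)

    # enable transparent zstd compression for everything on this filesystem
    # (e.g. in /etc/fstab; compress=zstd:3 selects compression level 3)
    UUID=<fs-uuid>  /mnt/btrfs-store  btrfs  defaults,compress=zstd:3  0  0

    # or set compression on a single directory/subvolume only
    btrfs property set /mnt/btrfs-store/archive compression zstd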

The use case I'm looking at is a small Proxmox cluster (3-4 nodes) with one node intentionally on larger, slower local storage (2 x 2 TB SATA disks) where I can put archival / low-speed / low-priority IO things - like misc old user archive data sitting in an NFS tank VM on this Proxmox node, accessible to VMs on the other Proxmox nodes in the same cluster.

If the thing was doing compression under the hood, it would be nice for it to 'just take care of' stupidly large compressible content transparently.

I'm not looking to do ZFS here since the server config is not really suitable for it (ie, a pair of disks only, a modest 32 GB of RAM, a modest 4-core / 8-thread CPU, and I don't want the extra drama of ZFS needing CPU:RAM:MoreCacheDiskStuff to work 'really well'). So I kind of have the feeling BTRFS can maybe give me a 'lower drama' compression-enabled / snapshot-enabled / error-checking-enabled filesystem. But I'm not 100% sure I believe it yet :)

hence this query

thank you if you read this far!

Tim
 
I don't want the extra drama of ZFS needing CPU:RAM:MoreCacheDiskStuff to work 'really well'
What drama? I'm running ZFS on much, much weaker machines. ZFS is more mature than BTRFS (though BTRFS is getting better), and we're talking about your data. We witnessed a BTRFS crash at a customer site this year; they lost all stored data and needed to restore from an older backup. I tried BTRFS myself with PVE, though more as a "does it work as it should" test. We will not be replacing our PVE-ZFS boxes, which have been running for years, with BTRFS any time soon.
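(For what it's worth, the usual knob for the RAM complaint is capping the ARC, which makes ZFS memory use predictable on small boxes. A minimal sketch; the 3 GiB figure is purely illustrative, tune it to your workload:)

    # /etc/modprobe.d/zfs.conf - cap the ARC at 3 GiB (value is in bytes)
    options zfs zfs_arc_max=3221225472

    # refresh the initramfs (needed if root is on ZFS), then reboot
    update-initramfs -u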
 
Hi, OK, I will comment - I was not trying to get into a discussion of ZFS, rather I was trying to get info about BTRFS. But...

The last time I tested ZFS (definitely more than a few years ago), I found there were 'behind the scenes' performance issues associated with filesystem activity "some of the time", in ways that were not super obvious to me. RAM consumption on the host due to ZFS, for example, appeared different than on other Proxmox hosts with boring EXT4. I had the feeling that in a 'boring and rigid config of an inexpensive OVH server rental' with precisely this sort of configuration:

1 x quad-core / 8-thread Xeon CPU // 32 GB RAM // 2 x SSD drives present.

...getting a satisfactory ZFS config which 'worked smoothly, for sure, above all else' was not simple. Note my vanilla cheap OVH Proxmox deploy is a boring MD Linux software RAID mirror on SSD, with EXT4 filesystems for boot, root, and pve-data. That is all. This works just great, but it is boring EXT4.
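(For context, that baseline amounts to roughly the following; device names and mount point are illustrative, not the actual OVH layout:)

    # mirror two SSD partitions with MD RAID1
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    # boring EXT4 on top, mounted for VM data
    mkfs.ext4 /dev/md0
    mount /dev/md0 /var/lib/vz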

Obviously I can also tweak this and use LVM on top of my MD RAID for the VM data storage volume. That is also OK. But when I tried a config on equally boring hardware with ZFS as the filesystem for the mirrored slices where VM data is stored, I did not find it 'equally good at being good, boring, reliable, consistent'. There were intermittent performance issues. Chasing them was drama, and ZFS forum support discussions tended to go down the rabbit hole of "dude, you are running ZFS on a poor hardware config, at least get a dedicated ZFS cache disk in there, and ideally more RAM", and comments like that. Which is precisely what I don't want to do. I want to be able to have a modestly priced OVH rental box, running Proxmox, on boring simple hardware, which gives me nice boring reliable consistent filesystem performance.

So. I guess, if you want to recommend a ZFS config where you are happy with consistent and reliable behaviour, and it works smoothly on a boring rigid config like I describe here, I am very happy to hear your thoughts on how much CPU:RAM gets eaten up on the host by committing to ZFS in this sort of environment, where you have (only) 2 x 480 GB SSD / 32 GB RAM / a quad-core Xeon CPU / no other drives / no dedicated ZFS cache on a physically distinct SSD from your main volumes.
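(As an aside, actual ARC memory use is easy to measure on a running host; a quick sketch, assuming the standard ZFS utilities are installed:)

    # human-readable summary of ARC size, target, and hit rates
    arc_summary | head -n 40
    # or read the current ARC size in bytes straight from the kernel stats
    awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats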

Ultimately I was trying to find out with this query whether BTRFS is now mature enough to use as a replacement for EXT4 on Proxmox, or not. For example, Synology, a vendor I roughly trust due to other work I have done over the years, now supports and recommends BTRFS as their pick for the filesystem when I deploy a new Synology NAS. In the past that was not the case, but now it appears to be. And this led me to thinking "hey, maybe BTRFS is not shit now, if Synology are willing to put their support dollars and time into BTRFS in prod on their NAS hardware?" So then I went looking to see if anyone has worked with BTRFS on Proxmox, and found out, wow, it has been in as a beta / technology preview since one version back; gosh, I missed that one. Better learn to read release notes a bit more attentively. And then, of course, I posted a query to the forum, to see if any real-world experience tells me "arrgh, run away, danger danger!" - or not.

I have the feeling that ZFS is fine (or even good, or excellent - I just don't know, and can't say from my own experience) if you are willing to have larger boxes (more CPU, more RAM, more disks present) and to make a more serious deployment commitment. But most of the Proxmox hosting boxes I'm doing are very basic config things, running one or two modest VMs, servicing the requirements of a modest-size team of humans (ie, maybe 2-5 people) who just want a 'nice boring reliable K-car automobile' kind of experience: nothing fancy, but it works, day in and day out, come hell or high water. And where the overarching goal is "zero admin human involvement required to make sure things run smoothly" as much as possible, since my clients don't want to pay me to manage their server for day-to-day boring 'are the lights on' kinds of questions. And I don't want to spend my time doing that 'door knock, status check, yup we are good, oops something is funny, small debug needed, whee, just wasted 20-30 minutes of fun learning-debug time on a new exciting problem that I really don't have time to fix today' kind of stuff.

I am also still biased, because my original days of learning ZFS go back to when I managed Solaris SPARC servers, and ZFS on Solaris was fine and dandy. Then I was happy to learn about the ZFS on Linux/x86 port, and then learned of colleagues who had some 'oops, ZFS fun, early-days testing' incidents where - I am sure their config was possibly not perfectly great, and maybe other things went wrong too - but their data went oops-bye-bye. I know ZFS on x86 Linux has matured a lot in the last ~15 years, but anyhow, I am still a tad delicate about ZFS on Linux, because losing data to 'filesystem fun issues' is just too much of a pain in the butt not to remember when it happens.

thank you for the feedback on the thread.

Tim
 
Also, for clarity, I am quite certain there are excellent situations where ZFS makes great sense for Proxmox. Clearly ZFS has been integrated in Proxmox for quite a long time now, is well supported, and lots of people use it. I've still remained under the impression (possibly purely my ignorance) that it is best suited to a 'more serious hardware config' where you had, for example, >32 GB RAM and more than 2 SSDs total in the system: maybe at least 1 or 2 SSDs (modest size is fine, so long as performance is good) for ZFS read and write cache, and ideally more than 2 SSDs for the data block storage volume. Possibly separate, distinct drives if you have different data pools / OS vs data set / and non-trivial workloads hammering the VM data pool. But again, if you want to comment and suggest "hey, I use this config for this workload, nothing fancy for the hardware, and it works just great", I would love to hear your recommended basic ZFS box build. Thank you!
 
Okay, thanks for the drama ;)
I have the same experience and feeling about BTRFS. That said, ZFS and BTRFS share a lot of similarities, so BTRFS would be a good drop-in replacement in the PVE sense once it comes out of beta / technology preview. BTRFS was created by Oracle as a direct competitor to Sun's ZFS, and now Oracle owns both.
However, RAID5/6 is still marked as unstable on the BTRFS status page, so unless you're running only RAID1/RAID10 you should be cautious about BTRFS in such setups. That is one of the main reasons I don't use it.
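(The mirrored profiles, by contrast, are considered stable; a minimal sketch of a two-disk BTRFS RAID1, device names illustrative:)

    # mirror both data and metadata across two disks
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/store
    # confirm the allocation profiles actually in use
    btrfs filesystem df /mnt/store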

for ZFS Cache read
We tried that and it was not that big of a difference. Having special devices is much, much better for overall performance.
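(For reference, a special vdev holds pool metadata and, optionally, small file blocks; a minimal sketch with illustrative device names. Note it must be as redundant as the pool, since losing the special vdev loses the pool:)

    # add a mirrored special device to an existing pool named 'tank'
    zpool add tank special mirror /dev/disk/by-id/ssd-A /dev/disk/by-id/ssd-B
    # optionally route file blocks up to 16K to the special vdev as well
    zfs set special_small_blocks=16K tank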

I would love to hear what is your recommended zfs basic box build.
"It depends" ... I'm running here an ODROID H2+ with two 960 GB Samsung Enterprise SSDs and 64 GB of RAM for about 30 virtual machines (mixed containers QEMUs). The machine is totally fine for me and is the smallest PVE box I have. We also have default minimal SSD-based 2 disk, 64GB PVE boxes at hetzner, working like a charm. We even had the same with enterprise hdds, yet that was a bit slow, but it worked for our requirements nonetheless. Our bigger ZFS boxes are all multiple shelves of enterprise harddisks with tripple mirrored special devices on ssds and dedicated SLOG devices. ZFS is also kind of vendor login, once started with ZFS and having continous replication in place, you will not switch to another storage technology. A couple of my collegues also run PVE on their laptops with ZFS, which also works quite well according to them.Those have also at least 16G RAM.

I also hope someone will chime in with experience of BTRFS on PVE. I don't read about it here in the forums very often... or at all, come to think of it. Maybe nobody uses it, or they have no problems with it.
 
Hello Tim, why not try BTRFS yourself, as there are no really big reservations against it, and then be the BTRFS expert here in the forum? :) Whatever your test system produces, you are not nailed down forever, and if you are not satisfied you can switch to another storage setup. So why the hell not, as there are a lot of BTRFS fans outside the Proxmox world who would be delighted if it works well... and maybe it brings more kick to PVE too :) Good luck! :)
 
Hi, thank you for the replies! I agree that once you get onto a working config (be it EXT4, ZFS, or other) you need some motivation to change - "if it ain't broke, don't fix it". To some extent that is my situation: I have a vanilla OVH Proxmox MD RAID config that 'just works fine', and I've been using ~the same basic config for >5 years across various Proxmox deploy scenarios - so - all OK. But definitely I think the next step is for me to actually get off my butt, do a test deploy of Proxmox on BTRFS, and see how it goes. Out of the gate my desired config is 'nice and boring' - a RAID1 mirror - and it seems this is the more solid BTRFS thing to test anyway (ie, if RAID5/RAID6 BTRFS is maybe-not-as-safe yet?).

Anyhoo, the next step is to dust off my test box and run a test workload on BTRFS for a while and see how it goes. I'll post a follow-up here in a while to loop back on the thread. In case anyone else has BTRFS experience or comments to share, please do post here so we harvest a bit more 'real world report info', maybe? :)
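(For anyone running a similar trial, the usual BTRFS health checks over a test period would be along these lines; the mount point is illustrative:)

    # verify every checksum on the filesystem in the background
    btrfs scrub start /mnt/store
    btrfs scrub status /mnt/store
    # per-device read/write/corruption counters - these should stay at 0
    btrfs device stats /mnt/store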

Thank you!
 
As a footnote: to some extent I feel the advent of the excellent PBS feature set has made some of this stuff less pressing for some folks (ie, me and my clients, for sure). Generally my client base is happy if I've got them set up on a "primary Proxmox" which is backed up to a local PBS <> which then has PBS offsite replication <> optionally with an offsite "cold DR" PBS node for the few clients who are serious-keen about some kind of DR setup. Most of my clients are not so serious: they have a prod Proxmox node and good PBS backups, and generally it works smoothly and reliably and life is good. I don't think I will need realtime volume replication between Proxmox nodes (ie, ZFS pool replication, for example) anytime soon, if ever, given how trivial the PBS backup <> PBS restore model is when needed. Obviously that is a different kind of recovery-time scenario than realtime or async-batched filesystem-level replication, but most of my clients are small enough that they are not fussed; the current setups are nice-solid-good. Anyhow, I guess to some extent that is part of the fun with Proxmox and all these tools: you have a very wide feature set, and simpler deployment use cases are, well, just simpler, and they get plenty of work done; and then on the other hand there are much larger deployments with more complex use cases, and those are great too, just different in which features are specifically used/needed. So huzzah for an amazingly flexible toolkit (ie, kudos again to the team @ Proxmox).
 
In case anyone else has BTRFS experience or comments to share, please do post here so we harvest a bit more 'real world report info' maybe?
I didn't feel comfortable deploying BTRFS for PVE until very recently; and when I say "comfortable" I mean for lab deployment.

It's been working well enough within its limited scope of use; subvolumes work correctly, snapshots work correctly, and inline compression seems to work as well. I think it's fair to say that a lot of corner cases have been solved, or at least mitigated to the point that you're not likely to run into them.
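(Those are all plain btrfs-progs operations; a quick sketch of the pieces referred to above, paths illustrative:)

    # a subvolume acts like a directory that can be snapshotted independently
    btrfs subvolume create /mnt/store/archive
    # instantaneous read-only snapshot of that subvolume
    btrfs subvolume snapshot -r /mnt/store/archive /mnt/store/.snaps/archive-before-change
    # confirm inline compression is active on a path
    btrfs property get /mnt/store/archive compression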

All that said, it seems like BTRFS is running a never-ending game of catch-up, to the point that I really wonder what its actual purpose is anymore. ZFS may be more RAM hungry, but RAM is relatively cheap and plentiful in 2024. The ONLY feature BTRFS offers over ZFS is that a BTRFS filesystem can be compacted (plus out-of-band dedup), but this doesn't apply to subvolumes, and without working parity RAID it is inferior as a filestore.
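(The "compacted" point presumably refers to rebalancing chunks and shrinking online, which ZFS pools cannot do; a minimal sketch, path illustrative:)

    # rewrite data/metadata chunks under 50% utilization, freeing whole chunks
    btrfs balance start -dusage=50 -musage=50 /mnt/store
    # a mounted BTRFS filesystem can also be shrunk in place
    btrfs filesystem resize -100G /mnt/store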

ie, if RAID5/RAID6 BTRFS is maybe-not-as-safe yet?
No, and it probably never will be, since none of the devs seem to have any need for it. SOMEONE would need to step up to fix the code, but this is NOT a trivial task and requires a real investment of truly skilled (read: valuable) manpower; since this niche is already served by ZFS (and maybe more to the point, Ceph, but that's a separate discussion), it's hard to imagine anyone will.
 
The main feature of BTRFS is the licence, meaning it's good in cases where projects don't want to risk potential legal trouble due to the ZFS/Oracle situation. This is not a diss: it's understandable to be better safe than sorry, be it in terms of data security or legal concerns.
 
The main feature of btrfs is the licence
Oh ffs, not this again. I literally don't know why this doesn't just die the death it deserves.

Oracle is NOT an issue, since the ZFS fork that OpenZFS is based on is CDDL licensed. The "conflict" is about using CDDL-derived code within a GPL-licensed kernel. Here is the ACTUAL license for OpenZFS, which you can hand over to your lawyer since you're so concerned:

https://github.com/openzfs/zfs/blob/master/LICENSE

But dont take my word for it:
https://www.fsf.org/licensing/zfs-and-linux

Privately, You Can Do As You Like

The GNU GPL has no substantive requirements about what you do in private; the GPL conditions apply when you make the work available to others.
-- edit to clarify: so long as no CDDL code is "intermingled" with GPL code, it is perfectly OK to distribute; if you CHOOSE to compile ZFS support INTO the kernel, you can do that as you like, as long as you don't distribute it. Neither Proxmox, Ubuntu, nor any of the other distros supporting ZFS ships a kernel with ZFS built in; they all ship it as a separate module.
 
