New server build: XFS or ext4?

tom

Proxmox Staff Member
If you run a single host and have suitable hardware, take a deeper look at ZFS.

- enough RAM (ECC)
- SSDs, at least for a ZIL cache device
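As a rough sanity check on the "enough RAM" point, a sketch like the following could report the host's memory. The helper name is invented for illustration, and the 1 GiB-of-RAM-per-TiB-of-pool rule of thumb in the comment is a common community guideline, not a Proxmox requirement:

```python
# Hypothetical helper: parse MemTotal out of /proc/meminfo text and report
# it in GiB, so you can judge "enough RAM" for ZFS (a common rule of thumb
# is roughly 1 GiB of RAM per 1 TiB of pool, plus headroom for the ARC).
def total_ram_gib(meminfo_text):
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            kib = int(line.split()[1])      # /proc/meminfo reports kB
            return kib / (1024 * 1024)
    raise RuntimeError("MemTotal not found in meminfo text")

# On a real Linux host you would feed it the actual file:
# with open("/proc/meminfo") as f:
#     print(f"{total_ram_gib(f.read()):.1f} GiB")
print(f"{total_ram_gib('MemTotal: 16384000 kB'):.1f} GiB")  # prints "15.6 GiB"
```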
 
Not interested in ZFS - tried it. My personal preference is still HW RAID with BBU. Never let me down.

Plus, when I did try it on a test server, the FSYNCS were horrible compared to HW RAID with BBU.

And I used RAID 10 for ZFS with 6 x 1 TB SSDs - the same as the HW RAID setup.

I may have been doing something wrong, but I'm not ready to change to something I haven't dived into deeply enough yet :)
 

tom

Proxmox Staff Member
Not interested in ZFS - tried it. My personal preference is still HW RAID with BBU. Never let me down.
If you are not interested, why do you ask?

Plus, when I did try it on a test server, the FSYNCS were horrible compared to HW RAID with BBU.

And I used RAID 10 for ZFS with 6 x 1 TB SSDs - the same as the HW RAID setup.

I may have been doing something wrong, but I'm not ready to change to something I haven't dived into deeply enough yet :)
fsyncs/sec is just a simple benchmark. But I can get more than 5000 on small setups (raidz1 plus one SSD ZIL).

Choosing ZFS gives you a lot of advantages; performance in terms of speed is just ONE.
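For readers wondering what an fsyncs/sec figure actually measures, here is a minimal sketch of the same idea (this is not pveperf itself, just a timed write-plus-fsync loop; the function name, block size and iteration count are arbitrary illustration values):

```python
import os
import tempfile
import time

def fsyncs_per_second(path, iterations=200):
    """Time a write+fsync loop. This is roughly what an FSYNCS/SECOND
    benchmark stresses: commit latency of the storage stack, which is
    where a BBU-backed write cache or an SSD ZIL makes the difference."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        start = time.perf_counter()
        for _ in range(iterations):
            os.write(fd, b"x" * 4096)   # one small block...
            os.fsync(fd)                # ...forced to stable storage
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return iterations / elapsed

with tempfile.NamedTemporaryFile() as tmp:
    print(f"{fsyncs_per_second(tmp.name):.0f} fsyncs/sec")
```

The absolute number depends entirely on the underlying device and cache, which is exactly why the same benchmark looks so different on HW RAID with BBU versus a bare pool.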
 

shantanu

Member
I would recommend using XFS only if you can guarantee that there will be no abrupt reboots of your PVE host.
Otherwise, go with ext4.

Regards,
Shantanu Gadgil
 
grin
I would recommend using xfs only if you can guarantee that there would no abrupt reboots of your PVE host.
This is good advice for XFS versions from around the year 2000. The block-zeroing bug was fixed, like, a decade ago?

ext4 is slow. ZFS is not for serious use (or is it in the kernel yet?). XFS is really nice and reliable - yes, even after serial crashing. ;-)
 
grin
You are truly an expert, sir!
I wonder what data you have used to reach that conclusion? :)
On second thought... no, I am not.

Is ZFS in the kernel yet, or is it still a repeatedly-breaking separate module? I have been testing it from time to time, and have lost a bunch of test volumes in due course since around... 2010 or so; there was a period around 2013, I believe, when the module was not compilable with recent kernels due to various compilation bugs. For half a year. Obviously it's great fun to have a filesystem which isn't in the kernel the machine uses.

Still, I am sure you have offered your useful and very detailed opinion with the best intentions, to help the readers of this topic, who are truly interested in ZFS, I'm sure you're sure, regardless of what the title says. I am keen to hear how this helpful advice of yours helps with XFS, or with the differentiation against ext4, which you may not have noticed above. Oh, and thanks for sharing.
 

tom

Proxmox Staff Member
Proxmox VE kernels have included ZFS since 3.4, for almost two years.
 
grin
Proxmox VE kernels have included ZFS since 3.4, for almost two years.
Tom, please, do not descend to that level. The "kernel" is the kernel. The "Proxmox kernel" is the [vanilla] kernel plus various external patches applied by Proxmox. I asked whether the kernel contains ZFS now, and you have answered a question I haven't asked; if I understand correctly, your answer was "no, it still doesn't include it and we patch it every time". Am I right?

I'm sure you do not want to hear me telling you what the problem with external patches is, since I guess you're somewhat familiar with the name "OpenVZ", but if you insist I can detail the possible problems for you. They mainly boil down to the problem space described as "problems we cannot fix and cannot help with, sorry, you have to wait or change". That this hasn't happened to you before doesn't mean it never will, and playing that game on a production cluster isn't funny anymore. (If it were, we would still be using OpenVZ, since it was magnitudes better than LXC - if only its patch would apply to anything recent, ever.)

And, as you may have noticed, the OP asked about XFS and ext4, not ZFS, and that still hasn't been answered. As for me, I am not interested in ZFS now, despite the great ideas it contains. I'm using Ceph for most of these features, so I'm not really missing them. And on RBD I would prefer using XFS, only neither CT creation nor backup seems to handle it well (or at all).
 

fabian

Proxmox Staff Member
Tom, please, do not descend to that level. The "kernel" is the kernel. The "Proxmox kernel" is the [vanilla] kernel plus various external patches applied by Proxmox. I asked whether the kernel contains ZFS now, and you have answered a question I haven't asked; if I understand correctly, your answer was "no, it still doesn't include it and we patch it every time". Am I right?

I'm sure you do not want to hear me telling you what the problem with external patches is, since I guess you're somewhat familiar with the name "OpenVZ", but if you insist I can detail the possible problems for you. They mainly boil down to the problem space described as "problems we cannot fix and cannot help with, sorry, you have to wait or change". That this hasn't happened to you before doesn't mean it never will, and playing that game on a production cluster isn't funny anymore. (If it were, we would still be using OpenVZ, since it was magnitudes better than LXC - if only its patch would apply to anything recent, ever.)

And, as you may have noticed, the OP asked about XFS and ext4, not ZFS, and that still hasn't been answered. As for me, I am not interested in ZFS now, despite the great ideas it contains. I'm using Ceph for most of these features, so I'm not really missing them. And on RBD I would prefer using XFS, only neither CT creation nor backup seems to handle it well (or at all).
just a minor nitpick: zfs is not a huge patch set on the kernel like openvz, but a kernel module. this might seem like a small difference to people not involved in kernel development / building, but it's actually vastly easier to maintain! it is no different than any of the other file system drivers in the kernel in that regard, except that it is maintained out of tree upstream. since our direct upstream kernel base (Ubuntu) also has ZFS compiled into their kernel, the latter is basically a non-issue (the module source is in-tree in Ubuntu's kernel sources).

to get back on topic: please file bug reports when you encounter issues when running containers with XFS instead of ext4. currently there are a lot of places where ext4 is kind of expected (either in principle, or for certain mount options), but this is not set in stone. it will probably never be the default choice (which also means that it might not be treated with the highest priority), but I don't see a reason not to offer it (or try to support it for most cases). if you are a developer, patches are of course also appreciated, but even just telling us where unexpected breakage exists helps ;)
 
grin
just a minor nitpick: zfs is not a huge patch set on the kernel like openvz, but a kernel module. this might seem like a small difference to people not involved in kernel development / building, but it's actually vastly easier to maintain!
Well, I have worked on a few places in the kernel in the past, and what you say is correct. Still, the ZFS module has been incompatible with released but new kernels for long periods of time (half a year of kernel-incompatible changes is way too long for me), and that didn't make me quite ZFS-module-friendly. :)

it is no different then any of the other file system drivers in the kernel in that regard, except that it is maintained out of tree upstream.
I'm sure you know, but for the dear readers: in-tree maintenance means the kernel cannot get released until the module is updated for the kernel changes, or there won't be changes which break module compilation; out-of-tree means that when the kernel changes, the module breaks and won't compile at all, and it keeps not compiling until the external maintainers get their stuff together. If they don't, can't or won't (I didn't care why in the ZFS case, since I was only testing), the filesystem will disappear, or you'd be forced to use the old kernel (which in my case wasn't acceptable due to some driver updates).

since our direct upstream kernel base (Ubuntu) also has ZFS compiled into their kernel, the latter is basically a non-issue (the module source is in-tree in Ubuntu's kernel sources).
Sounds good enough, but I know Ubuntu isn't the Rosetta Stone, and they would choke on an incompatible change. :)
But apart from nitpicking: do you have data on people using Linux kernel ZFS for serious, long-term production use? I know several people using it for smaller projects and lots of people playing with it, but I haven't heard of much critical use so far; everyone who loves it says they still won't use it for mass production.

to get back on topic: please file bug reports when you encounter issues when running containers with XFS instead of ext4. currently there are a lot of places where ext4 is kind of expected (either in principle, or for certain mount options), but this is not set in stone. it will probably never be the default choice (which also means that it might not be treated with the highest priority), but I don't see a reason not to offer it (or try to support it for most cases). if you are a developer, patches are of course also appreciated, but even just telling us where unexpected breakage exists helps ;)
I am not sure it's a bug that I cannot create XFS containers - or, rather, that the GUI and the cmdline tools neglect to ask what format of container I want. I can mount the RBD, reformat it and prepare it manually, but that's far from ideal. Would you like me to open a wishlist item for that, or would you rather have us create some XFS containers, break them, report the breakage, and once we're happy for a while, I submit that it should be natively supported? :) Right now it's not so important that I would make you work on it; there are much more important problems to solve (like LXC online migration, advanced fencing, native Ceph snapshots / backup / clone... and general LXC and related GUI stability).
 

fabian

Proxmox Staff Member
I'm sure you know, but for the dear readers: in-tree maintenance means the kernel cannot get released until the module is updated for the kernel changes, or there won't be changes which break module compilation; out-of-tree means that when the kernel changes, the module breaks and won't compile at all, and it keeps not compiling until the external maintainers get their stuff together. If they don't, can't or won't (I didn't care why in the ZFS case, since I was only testing), the filesystem will disappear, or you'd be forced to use the old kernel (which in my case wasn't acceptable due to some driver updates).
like I said, in our case this is not really a problem, as we don't run the most recent upstream kernel anyway. but still, the ZoL project has matured a lot, and their kernel compat fixes are usually rather quick.

Sounds good enough, but I know Ubuntu isn't the Rosetta Stone, and they would choke on an incompatible change. :)
But apart from nitpicking: do you have data on people using Linux kernel ZFS for serious, long-term production use? I know several people using it for smaller projects and lots of people playing with it, but I haven't heard of much critical use so far; everyone who loves it says they still won't use it for mass production.
"data" is probably not appropriate here, but I know there are some users here on the forum that have bigger installations that they seem to enjoy, and following zfs-discuss@list.zfsonlinux.org also regularly shows bigger enterprise installations. but there are probably still more enterprise ZFS installations running on solaris or *BSD, simply because it has been available (and stable) for longer on those platforms.

I am not sure it's a bug that I cannot create XFS containers - or, rather, that the GUI and the cmdline tools neglect to ask what format of container I want. I can mount the RBD, reformat it and prepare it manually, but that's far from ideal. Would you like me to open a wishlist item for that, or would you rather have us create some XFS containers, break them, report the breakage, and once we're happy for a while, I submit that it should be natively supported? :) Right now it's not so important that I would make you work on it; there are much more important problems to solve (like LXC online migration, advanced fencing, native Ceph snapshots / backup / clone... and general LXC and related GUI stability).
my rough draft would be to offer an advanced option for the mount points (i.e., not available in the GUI for now) that allows choosing a file system from a white list, defaulting to ext4. creating volumes and mounting them would need to check that option and decide on appropriate mount options.
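a hypothetical sketch of that whitelist idea (the names ALLOWED_FILESYSTEMS and choose_mkfs are invented for illustration and are not part of any Proxmox code):

```python
# Invented illustration of a "white list, defaulting to ext4" check;
# none of these names exist in Proxmox itself.
ALLOWED_FILESYSTEMS = {"ext4", "xfs"}

def choose_mkfs(requested=None):
    """Return the filesystem a CT mount-point volume should be formatted with."""
    if requested is None:
        return "ext4"                 # no option set: keep ext4 the default
    if requested not in ALLOWED_FILESYSTEMS:
        raise ValueError(f"unsupported filesystem: {requested}")
    return requested

print(choose_mkfs())         # prints "ext4"
print(choose_mkfs("xfs"))    # prints "xfs"
```

volume creation would call something like this before mkfs, and mounting would pick mount options per filesystem, so anything outside the whitelist fails early instead of producing a half-supported container.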

so if you know of other stuff that breaks with XFS formatted volumes besides the backup bug, feel free to file bugs for those issues (I guess ACL and quota might be broken?). I'll probably catch a lot of them myself when testing anyway ;)
 
