New server build: XFS or ext4?

sahostking

Renowned Member
What is recommended nowadays for Proxmox and HW RAID? XFS or ext4?

Basically just going to host KVM VMs on this box.
 
If you run a single host and you have suitable hardware, take a deeper look at ZFS. That means:

- enough RAM (ECC)
- SSDs, at least for a ZIL (SLOG) device
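
A minimal sketch of what I mean, assuming four data disks and one fast SSD (device names are placeholders, adjust to your hardware):

Code:
# two mirrored pairs (RAID10-style) with a dedicated SSD log (ZIL/SLOG) device
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
zpool add tank log /dev/nvme0n1
zpool status tank    # verify the layout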
 
Not interested in ZFS - tried it. My personal preference is still HW RAID with BBU. Never let me down.

Plus, when I did try it on a test server, the fsyncs/sec were horrible compared to HW RAID with BBU.

And I used RAID 10 for ZFS with 6 x 1TB SSD disks - the same as the HW RAID setup.

I may have been doing something wrong, but I'm not ready to switch to something I haven't dived into deeply enough yet :)
 
Not interested in ZFS - tried it. My personal preference is still HW RAID with BBU. Never let me down.

If you are not interested, why do you ask?

Plus, when I did try it on a test server, the fsyncs/sec were horrible compared to HW RAID with BBU.

And I used RAID 10 for ZFS with 6 x 1TB SSD disks - the same as the HW RAID setup.

I may have been doing something wrong, but I'm not ready to switch to something I haven't dived into deeply enough yet :)

fsyncs/sec is just a simple benchmark. But I can get more than 5000 on small setups (RAID-Z1 plus one SSD as ZIL).
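
(For context: those numbers typically come from pveperf, which prints a FSYNCS/SECOND figure. To compare apples to apples, run it against the storage path in question, e.g. the default /var/lib/vz:)

Code:
# Proxmox's built-in micro-benchmark; look for the FSYNCS/SECOND line
pveperf /var/lib/vz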

Choosing ZFS gives you a lot of advantages; performance in terms of speed is just ONE of them.
 
I would recommend using XFS only if you can guarantee that there would be no abrupt reboots of your PVE host.
Otherwise, go with ext4.

Regards,
Shantanu Gadgil
 
I would recommend using XFS only if you can guarantee that there would be no abrupt reboots of your PVE host.
This is good advice for XFS versions from around the year 2000. The block-zeroing bug was fixed, like, a decade ago?

ext4 is slow. ZFS is not for serious use (or is it in the kernel yet?). XFS is really nice and reliable - yes, even after serial crashing. ;-)
 
You are truly an expert, sir!
I wonder what data you used to reach that conclusion? :)
On second thought... no, I don't.

Is ZFS in the kernel yet, or is it still a repeatedly-breaking separate module? I have been testing it from time to time, and have lost a bunch of test volumes along the way since around... 2010 or so; there was a period around 2013, I believe, when the module could not be compiled against recent kernels due to various compilation bugs. For half a year. Obviously it's great fun to have a filesystem which isn't in the kernel the machine uses. Still, I am sure you have offered your useful and very detailed opinion with the best intentions, to help the readers of this topic who are truly interested in ZFS - I'm sure you're sure they are, regardless of what the title says. I am keen to hear how this helpful advice of yours helps with XFS, or with the differentiation against EXT4, which you may not have noticed above. Oh, and thanks for sharing.
 
Proxmox VE kernels have included ZFS since 3.4, for almost two years now.
 
Proxmox VE kernels have included ZFS since 3.4, for almost two years now.

Tom, please, do not descend to that level. The "kernel" is the kernel; the "Proxmox kernel" is the [vanilla] kernel plus various external patches applied by Proxmox. I asked whether the kernel contains ZFS now, and you answered a question I didn't ask; if I understand correctly, your answer was "no, it still doesn't include it, and we patch it in every time". Am I right?

I'm sure you do not want to hear me telling you what the problem with external patches is, since I guess you're somewhat familiar with the name "openvz", but if you insist I can detail the possible problems for you. They mainly boil down to the problem space described as "problems we cannot fix and cannot help with; sorry, you have to wait or change". That this hasn't happened to you before doesn't mean it never will, and playing that game on a production cluster isn't funny anymore. (If it were, we would still be using OpenVZ, since it was magnitudes better than LXC - if only its patch would apply to anything recent, ever.)

And, as you may have noticed, the OP asked about XFS and EXT4, not ZFS, and that still hasn't been answered. As for me, I am not interested in ZFS right now, despite the great ideas it contains: I'm using Ceph for most of these features, so I'm not really missing them. And on rbd I would prefer using XFS, only neither CT creation nor backup seems to handle it well (or at all).
 
Tom, please, do not descend to that level. The "kernel" is the kernel; the "Proxmox kernel" is the [vanilla] kernel plus various external patches applied by Proxmox. I asked whether the kernel contains ZFS now, and you answered a question I didn't ask; if I understand correctly, your answer was "no, it still doesn't include it, and we patch it in every time". Am I right?

I'm sure you do not want to hear me telling you what the problem with external patches is, since I guess you're somewhat familiar with the name "openvz", but if you insist I can detail the possible problems for you. They mainly boil down to the problem space described as "problems we cannot fix and cannot help with; sorry, you have to wait or change". That this hasn't happened to you before doesn't mean it never will, and playing that game on a production cluster isn't funny anymore. (If it were, we would still be using OpenVZ, since it was magnitudes better than LXC - if only its patch would apply to anything recent, ever.)

And, as you may have noticed, the OP asked about XFS and EXT4, not ZFS, and that still hasn't been answered. As for me, I am not interested in ZFS right now, despite the great ideas it contains: I'm using Ceph for most of these features, so I'm not really missing them. And on rbd I would prefer using XFS, only neither CT creation nor backup seems to handle it well (or at all).

just a minor nitpick: zfs is not a huge patch set on the kernel like openvz, but a kernel module. this might seem like a small difference to people not involved in kernel development / building, but it's actually vastly easier to maintain! it is no different from any of the other file system drivers in the kernel in that regard, except that it is maintained out of tree upstream. since our direct upstream kernel base (Ubuntu) also has ZFS compiled into their kernel, the latter is basically a non-issue (the module source is in-tree in Ubuntu's kernel sources).

to get back on topic: please file bug reports when you encounter issues when running containers with XFS instead of ext4. currently there are a lot of places where ext4 is kind of expected (either in principle, or for certain mount options), but this is not set in stone. it will probably never be the default choice (which also means that it might not be treated with the highest priority), but I don't see a reason not to offer it (or try to support it for most cases). if you are a developer, patches are of course also appreciated, but even just telling us where unexpected breakage exists helps ;)
 
just a minor nitpick: zfs is not a huge patch set on the kernel like openvz, but a kernel module. this might seem like a small difference to people not involved in kernel development / building, but it's actually vastly easier to maintain!
Well, I have worked on a few places in the kernel in the past, but what you say is correct. Still, the ZFS module has been incompatible with released-but-new kernels for long periods of time (half a year of kernel-incompatible changes is way too long for me), and that didn't make me particularly ZFS-module-friendly. :)

it is no different from any of the other file system drivers in the kernel in that regard, except that it is maintained out of tree upstream.
I'm sure you know, but for the dear readers: in-tree maintenance means the kernel cannot be released until the module is updated for the kernel changes (or changes which break module compilation won't be merged at all); out-of-tree means that when the kernel changes, the module breaks and won't compile, and it keeps not compiling until the external maintainers get their act together. If they don't, can't or won't (I didn't care why in the ZFS case, since I was only testing), the filesystem disappears, or you'd be forced to stay on the old kernel (which in my case wasn't acceptable due to some driver updates).

since our direct upstream kernel base (Ubuntu) also has ZFS compiled into their kernel, the latter is basically a non-issue (the module source is in-tree in Ubuntu's kernel sources).
Sounds good enough, but I know Ubuntu isn't the Rosetta Stone, and they too would choke on an incompatible change. :)
But apart from the nitpicking: do you have data on people using Linux-kernel ZFS for serious, long-term production use? I know several people using it for smaller projects and lots of people playing with it, but I haven't heard of much critical use so far; everyone who loves it says they still won't use it for mass production.

to get back on topic: please file bug reports when you encounter issues when running containers with XFS instead of ext4. currently there are a lot of places where ext4 is kind of expected (either in principle, or for certain mount options), but this is not set in stone. it will probably never be the default choice (which also means that it might not be treated with the highest priority), but I don't see a reason not to offer it (or try to support it for most cases). if you are a developer, patches are of course also appreciated, but even just telling us where unexpected breakage exists helps ;)

I am not sure it's a bug that I cannot create XFS containers; rather, the GUI and the command-line tools neglect to ask what format of container I want. I can mount the rbd, reformat it and prepare it manually, but that's far from ideal. Would you like me to open a wishlist item for that, or would you rather have us create some XFS containers, break them, report the breakage, and once we're happy for a while I submit that it should be natively supported? :) Right now it's not so important that I'd make you work on it; there are much more important problems to solve (like LXC online migration, advanced fencing, native Ceph snapshots / backup / clone... and general LXC and related GUI stability).
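
(For the record, the manual dance looks roughly like this; the image name and size are made up:)

Code:
# create and map an rbd image, then format and prepare it by hand
rbd create ct-test-disk --size 8192    # size in MB
rbd map ct-test-disk                   # shows up as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/ct-test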
 
I'm sure you know, but for the dear readers: in-tree maintenance means the kernel cannot be released until the module is updated for the kernel changes (or changes which break module compilation won't be merged at all); out-of-tree means that when the kernel changes, the module breaks and won't compile, and it keeps not compiling until the external maintainers get their act together. If they don't, can't or won't (I didn't care why in the ZFS case, since I was only testing), the filesystem disappears, or you'd be forced to stay on the old kernel (which in my case wasn't acceptable due to some driver updates).

like I said, in our case this is not really a problem as we don't run the most recent upstream kernel anyway. but still, the ZoL project has matured a lot and their kernel compat fixes are usually rather quick.

Sounds good enough, but I know Ubuntu isn't the Rosetta Stone, and they too would choke on an incompatible change. :)
But apart from the nitpicking: do you have data on people using Linux-kernel ZFS for serious, long-term production use? I know several people using it for smaller projects and lots of people playing with it, but I haven't heard of much critical use so far; everyone who loves it says they still won't use it for mass production.

"data" is probably not appropriate here, but I know there are some users here on the forum that have bigger installations that they seem to enjoy, and following zfs-discuss@list.zfsonlinux.org also regularly shows bigger enterprise installations. but there are probably still more enterprise ZFS installations running on solaris or *BSD, simply because it has been available (and stable) for longer on those platforms.

I am not sure it's a bug that I cannot create XFS containers; rather, the GUI and the command-line tools neglect to ask what format of container I want. I can mount the rbd, reformat it and prepare it manually, but that's far from ideal. Would you like me to open a wishlist item for that, or would you rather have us create some XFS containers, break them, report the breakage, and once we're happy for a while I submit that it should be natively supported? :) Right now it's not so important that I'd make you work on it; there are much more important problems to solve (like LXC online migration, advanced fencing, native Ceph snapshots / backup / clone... and general LXC and related GUI stability).

my rough draft would be to offer an advanced option for the mount points (i.e., not available on the GUI for now) that allows choosing a file system from a white list, defaulting to ext4. creating volumes and mounting them would need to check that option and decide on the appropriate mkfs invocation and mount options.
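
purely as an illustration of the shape I have in mind (this syntax does not exist):

Code:
# hypothetical, not implemented - just sketching how a per-mount-point
# file system choice could look on the CLI
pct set 101 -mp0 local-lvm:8,mp=/data,fs=xfs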

so if you know of other stuff that breaks with XFS formatted volumes besides the backup bug, feel free to file bugs for those issues (I guess ACL and quota might be broken?). I'll probably catch a lot of them myself when testing anyway ;)
 
Hello, after four years the software has changed a lot.
If you had to answer the same question today, in 2020, what file system would you use for the system installation and for the VMs?
XFS, ext4, or ZFS?

Consider a typical installation: one SSD for boot and VMs, and one HDD for backups, in a single-node configuration.



By the way, it's very interesting to read what Tom, Fabian, and Grin said back then.

Regards.
 

Hello, after four years the software has changed a lot.
If you had to answer the same question today, in 2020, what file system would you use for the system installation and for the VMs?
XFS, ext4, or ZFS?

Consider a typical installation: one SSD for boot and VMs, and one HDD for backups, in a single-node configuration.
Interesting to read it again: a lot has changed, some things not at all.

My personal opinion: I still try to avoid ZFS, since it still has that weird aura of some bugs staying unfixed for years, and it's still an external kernel sidecar which occasionally becomes uncompilable and has to wait weeks for a fix.
However, this is not relevant in the context of Proxmox, since the Proxmox folks pre-package and pre-compile it for you, so these problems are completely invisible to Proxmox users. [The bugs still affect them, of course, but they are usually not serious.]

After various tests it seems that ZFS is pretty good when someone has slow magnetic disks and a few fast SSDs to speed them up, since it uses cache in a well-integrated way, is really easy to set up and reorganise (attach/detach a cache volume), and there are good metrics available to see whether it's working or not. That is in contrast with, say, lvmcache, which turns out to be a burden no matter what magic I try to cast on it.
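
(The attach/detach and the metrics I mean are literally just this, assuming a pool named tank and a spare SSD:)

Code:
zpool add tank cache /dev/nvme0n1    # attach an L2ARC cache device, online
zpool remove tank /dev/nvme0n1       # detach it again, equally painless
zpool iostat -v tank 5               # per-device I/O, shows whether the cache is used
arc_summary                          # ARC/L2ARC hit-rate statistics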
RaidZ is another possible reason to use ZFS, but in the age of fast hardware RAID this may or may not be relevant. Compression may be nice; usually it's nothing really significant, convenient on the one hand and scary recovery-wise on the other. ;-)
Deduplication is a hoax. ;-)

Still, I exclusively use XFS where there is no diverse media under the system (SATA/SAS only, or SSD only), and have had no real problems for decades, since it's simple and it's fast. I only use ext4 where someone was clueless enough not to install XFS. ;-) The Proxmox installer handles it well and can install onto XFS from the start.

(To be honest, we also use Ceph extensively in clustered environments [rbd, mapped or not; cephfs and the object store a bit less], and it is a dream come true, reliable as hell (and believe me: hell seems to be the only sure point in this universe ;-)). But that's a different topic, and I use XFS on mapped rbd whenever I, or the automation, don't forget about it.)
 
Thanks for the feedback. I'm relatively new to this; I don't even have a year of experience. Now I am gaining some confidence and experimenting with a few things.
I did something with Ceph and I like it because it replicates instantly.
But I don't understand that part about Ceph rbd (what is RBD?)
 
