What is recommended nowadays for Proxmox and HW RAID? XFS or ext4?
basically just gonna host KVM vms on this box.
Not interested in ZFS - tried it. My personal preference is still HW RAID with BBU. Never let me down.
Plus, when I did try it on a test server, the FSYNCS were horrible compared to HW RAID with BBU.
And I used RAID 10 for ZFS with 6 x 1TB SSD disks - the same layout as the HW RAID.
I may have been doing something wrong, but I'm not ready to switch to something I haven't dived into deeply enough yet.
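(For anyone who wants to reproduce that comparison: here is a minimal sketch. The pool name and device names are illustrative, not from my setup. pveperf reports an FSYNCS/SECOND figure for the filesystem behind the given path.)

    # hypothetical 6-disk ZFS "RAID 10": three striped mirror pairs
    zpool create -o ashift=12 tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf

    # compare fsync rates: HW RAID volume vs. the ZFS pool
    pveperf /var/lib/vz   # e.g. ext4/XFS on the HW RAID controller
    pveperf /tank         # the ZFS pool's default mountpoint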
> I would recommend using xfs only if you can guarantee that there would be no abrupt reboots of your PVE host.

This is good advice for XFS versions from around the year 2000. The block zeroing bug was fixed, like, a decade ago.
> ...zfs is not for serious use...

You are truly an expert, sir! I wonder what data you have used to reach that conclusion?
Proxmox VE kernels have included ZFS since 3.4, for almost 2 years now.
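(A quick way to check this on a given host; the command and module name are standard, the output depends on what your kernel ships:)

    # show the ZFS module bundled with the running PVE kernel
    modinfo zfs | head -n 5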
I'm a big fan of XFS; my hosts and my guest filesystems all run XFS (around 1500 VMs).
I have never had corruption in 10 years.
Tom, please, do not descend to that level. The "kernel" is the kernel; the "Proxmox kernel" is the [vanilla] kernel plus various external patches applied by Proxmox. I asked whether the kernel contains ZFS now, and you have answered a question I haven't asked. If I understand correctly, your answer was "no, it still doesn't include it and we patch it every time". Am I right?
I'm sure you do not want to hear me tell you what the problem with external patches is, since I guess you're somewhat familiar with the name "openvz", but if you insist I can detail the possible problems. They mainly boil down to the problem space described as "problems we cannot fix and cannot help with, sorry, you have to wait or change". That this hasn't happened to you before doesn't mean it never will, and playing that game on a production cluster isn't funny anymore. (If it were, we would still be using OpenVZ, since it was magnitudes better than LXC - if only its patch set would still apply to anything recent, ever.)
And, as you may have noticed, the OP asked about XFS and EXT4, not ZFS, and that still hasn't been answered. As for me, I am not interested in ZFS right now, despite the great ideas it contains; I'm using Ceph for most of those features, so I'm not really missing them. And on rbd I would prefer using XFS, only neither CT creation nor backup seems to handle it well (or at all).
> just a minor nitpick: zfs is not a huge patch set on the kernel like openvz, but a kernel module. this might seem like a small difference to people not involved in kernel development / building, but it's actually vastly easier to maintain!

Well, I have worked on a few places in the kernel in the past, and what you say is correct. Still, the ZFS module has been incompatible with released-but-new kernels for long periods of time (half a year of kernel-incompatible changes is way too long for me), and that didn't make me particularly ZFS-module-friendly.
> it is no different than any of the other file system drivers in the kernel in that regard, except that it is maintained out of tree upstream.

I'm sure you know this, but for the dear readers: in-tree maintenance means the kernel cannot be released until the module is updated for the kernel changes, or that changes which break module compilation won't be merged at all; out-of-tree means that when the kernel changes, the module breaks and won't compile, and it keeps not compiling until the external maintainers get their act together. If they don't, can't or won't (I didn't care why in ZFS's case, since I was only testing), the filesystem disappears, or you are forced to stay on the old kernel (which in my case wasn't acceptable due to some driver updates).
> since our direct upstream kernel base (Ubuntu) also has ZFS compiled into their kernel, the latter is basically a non-issue (the module source is in-tree in Ubuntu's kernel sources).

Sounds good enough, but I know Ubuntu isn't the Rosetta Stone either, and they would choke on an incompatible change just the same.
To get back on topic: please file bug reports when you encounter issues running containers with XFS instead of ext4. Currently there are a lot of places where ext4 is kind of expected (either in principle, or for certain mount options), but this is not set in stone. It will probably never be the default choice (which also means it might not be treated with the highest priority), but I don't see a reason not to offer it (or try to support it for most cases). If you are a developer, patches are of course also appreciated, but even just telling us where unexpected breakage exists helps.
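(When reporting such breakage, it helps to state which filesystem the container root actually sits on. A minimal sketch, assuming a container with VMID 100; the mount path follows the usual pct convention:)

    # mount the container's rootfs on the host without starting it
    pct mount 100
    # show the filesystem type backing the mounted root
    df -T /var/lib/lxc/100/rootfs
    pct unmount 100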
But apart from nitpicking: do you have data on people using Linux-kernel ZFS for serious, long-term production use? I know several people using it for smaller projects and lots of people playing with it, but I haven't heard of much critical use so far; even those who love it say they still won't use it for mass production.
I am not sure it's a bug that I cannot create XFS containers; rather, the GUI and the command-line tools neglect to ask what filesystem format I want for the container. I can map the rbd, reformat it and prepare it manually, but that's far from ideal. Would you like me to open a wishlist entry for that, or would you rather we create some XFS containers, break them, report the breakage, and once we're happy for a while I submit that it should be natively supported? Right now it's not so important that I would make you work on it; there are much more important problems to solve (like LXC online migration, advanced fencing, native Ceph snapshots / backup / clone... and generic LXC and related GUI stability).
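(For reference, the manual route mentioned above looks roughly like this; the pool, image and mount point names are illustrative:)

    # map the container's rbd image on the host
    rbd map rbd/vm-100-disk-1
    # reformat it with XFS instead of the ext4 it was created with
    mkfs.xfs -f /dev/rbd0
    mount /dev/rbd0 /mnt/ct-root
    # ...populate the rootfs, then unmount and unmap
    umount /mnt/ct-root
    rbd unmap /dev/rbd0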
Hello, after four years the software has changed a lot. If you had to answer the same question today, in 2020, what file system would you use for the system installation and for the VMs: XFS, EXT4 or ZFS? Consider a typical installation: one SSD for boot and VMs, and one HDD for backups, in a single-node configuration.
Interesting to read this thread again: a lot has changed, some things not at all.
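(Whichever filesystem you pick, a single-node layout like that usually ends up as two storage entries. A minimal sketch of /etc/pve/storage.cfg, with illustrative IDs and the assumption that the backup HDD is mounted at /mnt/backup:)

    dir: local
            path /var/lib/vz
            content images,rootdir

    dir: backup
            path /mnt/backup
            content backup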