ZFS and EXT4 mixed?

bofh

Well-Known Member
Nov 7, 2017
Background:
I'm about to rent a new server; unfortunately it's a limited setup, so I cannot have a dedicated boot drive.

So normally I would go for a GPT layout and a ZFS root install.
However, problems with ZFS on root in recent months on some servers make me think twice.

Is there any argument to be made against boot/root on ext4 and ZFS on a second partition on each drive?
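For concreteness, a per-drive layout like that could look roughly as sketched below (a sketch only, assuming legacy BIOS boot with GRUB and sgdisk; device name, sizes and pool split are placeholders, not a tested recipe):

```
# Hypothetical layout, repeated on each drive (replace /dev/sdX):
sgdisk -n1:0:+1M  -t1:EF02 /dev/sdX   # BIOS boot partition (for GRUB on GPT)
sgdisk -n2:0:+32G -t2:8300 /dev/sdX   # ext4 boot/root partition
sgdisk -n3:0:0    -t3:BF01 /dev/sdX   # remaining space for the ZFS data pool
```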
 
It's possible. You can also use ZFS on just a partition, and you can even put ZFS inside a file. Based on my experience and on Linus' opinion, I don't use ZFS in production or in my projects yet; I will use ZFS later. But you can.
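To illustrate those two points, a quick sketch (pool and device names are made up; a file-backed pool is really only useful for testing):

```
# ZFS pool on an existing partition instead of a whole disk:
zpool create datapool /dev/sdX3

# ZFS pool backed by a plain file (testing only, not for production):
truncate -s 10G /zfs-test.img
zpool create testpool /zfs-test.img
```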
 

I do use a lot of ZFS on many, many systems, in production.
However, I'd prefer to have it only on data drives and have a separate system drive.

I even use ZFS for root on some of them, but there I have KVM access to the servers and can quickly resolve any boot issues.
The new server in question doesn't have that option, and the rescue system doesn't offer ZFS out of the box, so every time I have to rescue a ZFS root it's a bit of jumping through hoops.
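For reference, on a Debian-based rescue system the missing ZFS support can usually be added by hand; roughly something like the sketch below, assuming the contrib repository is enabled and matching kernel headers are available ('rpool' is the default pool name of a Proxmox ZFS-root install):

```
# Inside the rescue environment (Debian-based, contrib enabled):
apt update
apt install -y linux-headers-$(uname -r) zfs-dkms zfsutils-linux
modprobe zfs
zpool import -f -R /mnt rpool   # mount the root pool under /mnt for repairs
```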



So my question remains whether there's any argument against
GPT / ext4 for root / ZFS on a data partition x 6 drives
vs
GPT / ZFS for root / ZFS on data x 6 drives (which would be the default ZFS setup in Proxmox).
 
The new server in question doesn't have that option, and the rescue system doesn't offer ZFS out of the box, so every time I have to rescue a ZFS root it's a bit of jumping through hoops.
Given these restrictions, your idea is probably an okay one. Depending on the disk layout and RAID capabilities you will face different issues. ZFS on HW RAID is a bad idea; PVE only offers software RAID via ZFS.

You would probably have to install Debian first on an md RAID to get redundancy for your system and then create the ZFS pool with the wanted redundancy.
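As a sketch of that second step: once Debian sits on the md RAID, the data pool with its own redundancy could be created roughly like this (RAIDZ2, the pool name and the by-id partition names are just an example, not a prescription):

```
# Create the data pool on the ZFS partitions of all six drives:
zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/ata-DISK1-part3 /dev/disk/by-id/ata-DISK2-part3 \
    /dev/disk/by-id/ata-DISK3-part3 /dev/disk/by-id/ata-DISK4-part3 \
    /dev/disk/by-id/ata-DISK5-part3 /dev/disk/by-id/ata-DISK6-part3
```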
 
No, not a hardware RAID; I'm aware of that issue.

Yeah, that was my plan: plain Debian, mdraid for the root partition as a RAID10, then an identical layout on all disks for ZFS, then Proxmox.

It's just a plain storage box anyway; performance isn't even an issue, just data security and size.
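A rough sketch of that layout step, assuming six identically sized drives (device names are placeholders; partition 2 carries the root RAID, partition 3 the ZFS member):

```
# Copy the partition table of the first disk to the others:
sgdisk --backup=/tmp/parts.gpt /dev/sda
for d in /dev/sd{b,c,d,e,f}; do
    sgdisk --load-backup=/tmp/parts.gpt "$d"
    sgdisk -G "$d"    # give the copy new random GUIDs
done

# RAID10 across the root partitions, then ext4 on top:
mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[a-f]2
mkfs.ext4 /dev/md0
```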


Guess it will take a few years until we can have solid ZFS on root.
 
I have been running ZFS on root for years now on my Arch-based laptops/desktops and for quite a while on my Proxmox servers.
You should consider that whatever problems you read about here in the forum are a small part of the happily working 300,000 installations out there.
 

You should consider that:
a- your anecdotal experience has no statistical significance
b- ZFS isn't used for root that much outside of Proxmox; instead it's used on data drives with no OS on them
c- ZFS on Linux is a re-implementation, not the same as it is on Solaris
d- it's pretty much irrelevant how many installations suffer that issue; zero boot issues is the only number you want to aim for with a server filesystem

Just search these forums for "zfs grub error"; we have these kinds of issues a lot. I have dozens and dozens of servers running and run into issues with ZFS as the root FS regularly.
For example, I run 10 identical machines with NVMe drives in a DS in London. After an update, 4 of these had a boot hang.
I had to rescue in, force-import and then export the pool in order to make them boot again.

The worst thing is that the root cause is still unknown; we have some guesses but nothing concrete or reproducible. And it's hard because at this point there are no logs and very limited debugging options (basically none).
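For context, that workaround is roughly the following, run from a rescue system with ZFS support ('rpool' is the Proxmox default pool name; -f forces the import of a pool that still looks in use by the broken boot):

```
zpool import -f -R /mnt rpool   # force-import the root pool under /mnt
zpool export rpool              # export it cleanly again
reboot                          # the next boot can now import it normally
```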
 
@narrateourale btw, I'm aware I posted a "noobie" question; that doesn't mean I am one.
But at times, no matter how much experience you have, you might not be sure about some basics, and I'm not too proud to ask, as there could always be something I don't know lurking in the dark.

So no, I do not need a basic answer that my question is irrelevant; I know exactly why I would like to do it this way.
 
Yeah, that was my plan: plain Debian, mdraid for the root partition as a RAID10, then an identical layout on all disks for ZFS, then Proxmox.

In limited disk setups, I ran that for years (until I switched completely to ZFS). On setups with more disks I also use a hardware RAID system drive and the data drives (often via a separate HBA) as ZFS.

The only downside (if you really want to hear two minor ones) is that an EXT4+ZFS setup has two buffer caches: the ARC for ZFS and the default Linux buffer/filesystem/block cache - and of course the "complexity" of having two filesystems instead of one.
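If the double caching ever becomes a concern in such a mixed setup, the ARC can be capped via a module option; a minimal sketch (the 8 GiB value is an arbitrary example, tune it to the box):

```
# Limit the ZFS ARC to 8 GiB so it leaves room for the ext4 page cache:
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # apply the limit at boot if zfs loads from the initramfs
```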
 

Sorry, with all the corona things going on I didn't check and put the new project to the side.

Yeah, thanks, I am aware of the 2 buffer caches, but that is always the case with hardware RAID boot drives as well.

Thanks for your input. I was wondering if there's a hidden string I'm gonna trip over, so it's really reassuring that someone is running it like this.
At this point I see no other option anymore on those very limited rented servers. ZFS isn't ready yet as a primary FS on Linux, sadly.
It probably never will be; btrfs will be the next one, no matter how much we dislike it :)
 
My personal experience, without statistical significance, is that I have been using ZFS on Ubuntu 18.04 for close to 2 years now, on multiple servers.
Including on top of LUKS and on the system drives, with RAID1 for root and RAIDZ2 for data.
We did extensive testing before we deployed it.
And I had 0 issues. It is rock stable.
https://github.com/openzfs/zfs/wiki/Ubuntu-18.04-Root-on-ZFS
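Very roughly, the pool layout described there has this shape (pool, device and partition names are placeholders; the linked HOWTO covers the real procedure, including the LUKS and bootloader parts):

```
# Mirrored (RAID1) pool for the root filesystem:
zpool create -o ashift=12 rpool mirror \
    /dev/disk/by-id/ata-DISK1-part4 /dev/disk/by-id/ata-DISK2-part4

# RAIDZ2 pool for the data:
zpool create -o ashift=12 data raidz2 \
    /dev/disk/by-id/ata-DISK{1,2,3,4,5,6}-part5
```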

I'm currently running it on 20-ish servers; almost all of them have had this issue at least once.
Running it on Ubuntu doesn't say anything about running it on Debian/Proxmox with a different kernel.
And testing only goes so far: since we don't really know the root cause, we don't have a way to reproduce it for testing.
With many similar-ish errors it's even harder to collect the relevant threads.
 
