> if I should consider moving back to ext4
You will lose a lot of functionality, some of which may be relevant for you:
Integrity
ZFS assures **integrity**. It will deliver the *same* data when you read it as was written at some point in the past. To assure this a checksum is calculated (and written to disk) when you write the actual data. When you _read_ the data the checksum is re-calculated, and only if both match is the data handed to the reading process.
Most other “classic” filesystems do not do this kind of check and simply deliver whatever comes from the disk instead.
For most on-disk problems a “read error” will occur, avoiding handing over damaged data. These days this happens roughly once per 10^15 bits read - it is called a “URE” (“Unrecoverable Read Error”). A _much higher_ number of blocks needs to be read before you get actually different/wrong/damaged data *without an error message*. On the other hand this is not the whole story: errors may get introduced not only on the platters or in an SSD cell but also on the physical wire, the physical connectors, on the motherboard’s data bus, in RAM or inside the CPU. So yes, receiving damaged data in your application is _not_ impossible! ZFS will detect this with a high probability.
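If you want to see this mechanism in action: a scrub re-reads every allocated block in the pool and verifies each checksum, and any mismatch shows up as a counter in the status output. A minimal sketch, assuming a pool named `tank` (substitute your own pool name):

```sh
# Walk every allocated block and verify its checksum against the stored one
zpool scrub tank

# Inspect the result: the READ/WRITE/CKSUM counters reveal silently damaged data
zpool status -v tank
```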
Snapshots
The concept of “copy-on-write” (“CoW”) allows implementing technically **cheap snapshots**, while most other filesystems do not have this capability. “LVM-thick” does not offer snapshots at all and LVM-thin has other drawbacks. Directory storages allow for .qcow2 files, but they introduce a whole new layer of complexity compared to the raw block devices (“ZVOL”) used for virtual disks.
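To illustrate how cheap this is in practice: a snapshot is created instantly regardless of dataset size, because CoW only has to preserve the current block tree. A sketch with hypothetical names (`rpool/data/vm-100-disk-0` stands in for one of your actual ZVOLs):

```sh
# Instant, nearly free snapshot of a virtual disk (ZVOL)
zfs snapshot rpool/data/vm-100-disk-0@before-upgrade

# List snapshots and the space they actually occupy
zfs list -t snapshot

# Roll back if something went wrong (discards changes made after the snapshot)
zfs rollback rpool/data/vm-100-disk-0@before-upgrade
```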
Compression
ZFS allows for transparent and cheap **compression** - you can simply store more data on the same disk. A long time ago compression cost noticeable CPU time; for several years now the usual algorithms (like LZ4) have been so fast on modern CPUs that you won’t notice any delay.
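Enabling it and checking what you gained is a one-liner each; a sketch, again assuming a pool named `tank`:

```sh
# Enable transparent compression (lz4 is the usual low-overhead choice)
zfs set compression=lz4 tank

# See how much space you are actually saving
zfs get compressratio tank
```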
Scaling
ZFS **scales**! You can add an additional vdev at any time to grow capacity, as the sketch below shows. Some people were missing “in-vdev” “raidz expansion”, which is (probably) coming to PVE in 2025.
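Growing a pool by one more mirror vdev looks like this; the device names are of course placeholders for your own disks:

```sh
# Add another mirrored vdev - capacity (and IOPS) grow immediately
zpool add tank mirror /dev/disk/by-id/ata-DISK_C /dev/disk/by-id/ata-DISK_D
```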
Special Device
Using ZFS you can combine old rotating rust and speedy SSD/NVMe by utilizing a “**Special Device**”. This allows using ZFS in some(!) use cases which are impossible to create otherwise. That special device will store metadata (and _possibly_ some more “small blocks”). This speeds up some operations, depending on the application. Usually the resulting pool can be twice as fast because we have a higher number of vdevs now --> the need for physical head movements is _drastically_ reduced. And because the special device may be really small (below 1% of raw capacity) this is a cheap and recommended optimization. Use fast devices in a mirror for this - if it dies, the pool is completely gone. (If your data is RaidZ2, use a _triple_ mirror!)
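Attaching a special device to an existing HDD pool might look like this; the NVMe device names and the 16K small-blocks cutoff are illustrative, not a recommendation for your exact setup:

```sh
# Mirrored special vdev: it holds the metadata (lose it and the whole pool is lost!)
zpool add tank special mirror /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B

# Optionally also redirect small data blocks (here: up to 16K) to the fast devices
zfs set special_small_blocks=16K tank
```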
All of the above comes with a price tag, leading to some counterarguments:
For every single write command additional metadata and ZIL operations are required. ZFS writes more data, more frequently, than other filesystems. More writes means “slower”, which users do not perceive positively. This slows down especially “sync” writes. _Usual_ “async” data is quickly buffered in RAM for up to 5 seconds before it is written to disk.
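That 5-second window is the ZFS transaction group timeout; on Linux you can inspect (and tune) it via the module parameter. A sketch, assuming a standard OpenZFS-on-Linux installation and a pool named `tank`:

```sh
# Default is 5 (seconds): how long async writes may sit in RAM before a txg commit
cat /sys/module/zfs/parameters/zfs_txg_timeout

# Check whether a dataset honors sync requests ("standard"), forces them
# ("always") or ignores them ("disabled" - dangerous!)
zfs get sync tank
```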
To compensate for that slowing-down aspect it is _highly_ recommended to use “Enterprise”-class devices with “Power-Loss Protection” (“PLP”) instead of cheap “Consumer”-class ones. Unfortunately these devices are much more expensive...
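The difference shows up most clearly in sync-write benchmarks: a PLP device can safely acknowledge a flush from its capacitor-backed cache, while a consumer device has to hit the flash first. A quick comparison sketch using `fio` (the test file path and sizes are placeholders):

```sh
# Sync-write test: one fsync per 4K block - PLP devices shine here, consumer SSDs crawl
fio --name=synctest --filename=/tank/fio-testfile --rw=write --bs=4k \
    --fsync=1 --size=1G --ioengine=psync
```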