Lost all data on ZFS RAID10

Just went to look at some of the most recent symptoms one can experience with this versatile filesystem; a fairly nice recent one:

The HDD volumes can significantly affect the write performance of SSD volumes on the same server node

It's really by design...


Well, anyhow, this is a filesystem (and volume manager) that could not do reflinks till last year, after all:

- COW cp (--reflink) support
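(For reference, with OpenZFS 2.2+ block cloning enabled, a reflink copy is just the stock coreutils invocation; `--reflink=auto` falls back to a plain copy on filesystems without CoW support. The paths below are made up for illustration:)

```shell
# Reflink (copy-on-write) copy; uses block cloning on OpenZFS 2.2+,
# while --reflink=auto silently falls back to a regular copy on
# filesystems without CoW support. Paths are illustrative.
echo "hello zfs" > /tmp/reflink_src.txt
cp --reflink=auto /tmp/reflink_src.txt /tmp/reflink_dst.txt
cat /tmp/reflink_dst.txt   # prints "hello zfs"
```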

TRIM took till 2019 to get working:
- Add TRIM support
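For what it's worth, since 0.8 that support is exposed as a one-off command plus a pool property; a sketch, with `tank` as a placeholder pool name:

```shell
# Manually TRIM all eligible vdevs in a pool (ZFS 0.8+).
zpool trim tank

# Or let the pool discard freed blocks continuously.
zpool set autotrim=on tank

# Show TRIM progress/status per vdev.
zpool status -t tank
```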

It took 10 years (since its inception) to fix its own death by swap:

- Support swap on zvol
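The workaround that eventually landed in the OpenZFS FAQ is a zvol created with very specific properties; a hedged sketch (the pool name `rpool` and the 4G size are assumptions):

```shell
# Create a zvol for swap with the properties the OpenZFS FAQ
# recommends to avoid the deadlock-prone defaults.
zfs create -V 4G -b "$(getconf PAGESIZE)" \
    -o compression=zle \
    -o logbias=throughput \
    -o sync=always \
    -o primarycache=metadata \
    -o secondarycache=none \
    rpool/swap

# Format and enable it as swap.
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
```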

And then it keeps coming back:

- Swap deadlock in 0.7.9


Then there are the especially unlucky ZVOLs:

- ZVOL data corruption with zvol_use_blk_mq=1

- [Performance] Extreme performance penalty, holdups and write amplification when writing to ZVOLs

Yes, solving this one caused the one above; also, some of the comments there are pure gold. Before and after the fix:
Code:
Sequential dd to one zvol, 8k volblocksize, no O_DIRECT:

    legacy submit_bio()     292MB/s write  453MB/s read
    this commit             453MB/s write  885MB/s read
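If you want to know whether you're on the affected I/O path, the blk-mq submission is gated by a module parameter; a sketch, assuming a stock OpenZFS module (the modprobe.d filename is my own choice):

```shell
# Check whether zvols use the blk-mq I/O path (0 = legacy, 1 = blk-mq).
cat /sys/module/zfs/parameters/zvol_use_blk_mq

# To pin it off across reboots (takes effect on module reload);
# the file name here is arbitrary.
echo "options zfs zvol_use_blk_mq=0" > /etc/modprobe.d/zfs-zvol.conf
```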

- Discard operations on empty zvols are "slow"


... and I won't even go further back; this is just what I myself remember from attempting to use it as the backend for a HV.

And every now and then there are mysteries like:


- Better performance with O_DIRECT on zvols

- ZVOL caused machine freeze

(yes, PVE brings these up; they might be other kernel-related issues, but that's what one gets for running "tainted" kernels)


@ubu And why do you prefer it?
 
I find it bizarre that the PVE install puts BTRFS on some "experimental" pedestal,
The dev team simply didn't put as much effort into the tooling and integration; consequently it's just not that mature. It doesn't mean there is no maturity to the underlying filesystem. FWIW, I have a single-node deployment with BTRFS which seems to work fine, but I don't think I want to use it in anger.
Can you elaborate?
I'm sure he could; others (including myself) have, all over the forum. You seem to have some vendetta around this choice by the developers; just because you don't agree with the reasons for its (ZFS) use and preferential status doesn't mean those reasons aren't there. Why is this such a passion for you?
 
The dev team simply didn't put as much effort into the tooling and integration; consequently it's just not that mature.

Fair enough, I was mostly referring to the fact that ZFS by itself could be considered experimental, not really thinking of the tooling part. Then again, if the OP is not after the features (e.g. replication), what difference does it make to him (the better-tested tooling?).

I'm sure he could; others (including myself) have, all over the forum. You seem to have some vendetta around this choice by the developers;

When the OP starts a thread like this, what's wrong with asking him why he chose that particular filesystem? Just because it's popular on the forum? I don't remember by heart what the default in the ISO installer is, probably not even ZFS, so it's not about me being upset about the choice of ZFS for e.g. replication support.

just because you don't agree with the reasons for its (ZFS) use and preferential status doesn't mean those reasons aren't there. Why is this such a passion for you?

Listing a couple of factual links to trackers is passionate nowadays?
 
