Just went to look at some of the most recent symptoms one can experience with this versatile filesystem; a fairly nice one is recent:
The HDD volumes can significantly affect the write performance of SSD volumes on the same server node
It's really by design...

Well, anyhow, this is a filesystem (and volume manager) that could not do reflinks till last year, after all:
- COW cp (--reflink) support
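For reference, this is what reflink copies look like from userspace; `--reflink=auto` uses a reflink/block-clone where the filesystem supports it (e.g. OpenZFS 2.2+ with block cloning) and silently falls back to a full copy elsewhere, while `--reflink=always` fails instead of falling back:

```shell
# Create a test file, then clone it. On a filesystem with reflink
# support the clone shares blocks with the source until either side
# is modified; elsewhere --reflink=auto degrades to a normal copy.
dd if=/dev/urandom of=src.img bs=1M count=4 status=none
cp --reflink=auto src.img clone.img
cmp src.img clone.img && echo "identical"
```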
TRIM took till 2019 to get working:
- Add TRIM support
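Since that landed (ZFS 0.8), TRIM can be run as a one-shot operation or enabled continuously via a pool property; a sketch, assuming a pool named `tank` (these need a live pool and root, so they are illustration only):

```shell
# One-shot TRIM of all eligible vdevs in the pool:
zpool trim tank

# Watch TRIM progress per vdev:
zpool status -t tank

# Or enable continuous/automatic TRIM as a pool property:
zpool set autotrim=on tank
```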
It took 10 years (since its inception) to fix its own death by SWAP:
- Support swap on zvol
And then it keeps coming back:
- Swap deadlock in 0.7.9
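For context, the swap-on-zvol setup in question is the one the OpenZFS FAQ recommends: volblocksize matching the page size, cheap compression, synchronous writes, and caching mostly disabled. The pool name `rpool` and the 4G size are assumptions here; this needs root and a real pool:

```shell
# Create a 4G zvol tuned for swap (property set per the OpenZFS FAQ):
zfs create -V 4G -b "$(getconf PAGESIZE)" \
    -o compression=zle \
    -o logbias=throughput \
    -o sync=always \
    -o primarycache=metadata \
    -o secondarycache=none \
    rpool/swap

# Format and enable it:
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
```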
Then especially the unlucky ZVOLs:
- VOL data corruption with zvol_use_blk_mq=1
- [Performance] Extreme performance penalty, holdups and write amplification when writing to ZVOLs
Yes, solving this caused the above one; also, some comments are pure gold there. Before and after the fix:
Code:
Sequential dd to one zvol, 8k volblocksize, no O_DIRECT:
legacy submit_bio()   292MB/s write   453MB/s read
this commit           453MB/s write   885MB/s read
- Discard operations on empty zvols are "slow"
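The sequential numbers quoted above come from a plain dd run. A sketch of that kind of test; the target would normally be a zvol device node such as `/dev/zvol/tank/vol` (an assumption, not from the quoted commit), but a scratch file is used here so the commands run anywhere:

```shell
# Sequential write, 8k blocks, through the page cache (no O_DIRECT),
# mirroring the dd pattern quoted above; fsync at the end so the
# write actually reaches the target before dd reports throughput.
TARGET=/tmp/zvol-standin.img
dd if=/dev/zero of="$TARGET" bs=8k count=4096 conv=fsync

# Sequential read back:
dd if="$TARGET" of=/dev/null bs=8k
```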
... I won't even go further back; this is just what I myself remember from attempting to use it as a backend for a HV.
And every now and then there's mysteries like:
- Better performance with O_DIRECT on zvols
- ZVOL caused machine freeze
(Yes, PVE brings these up; there might be other kernel-related issues in play, but that's what one gets for running "tainted" kernels.)

@ubu And why do you prefer it?