The speed depends on your pool configuration.
# zfs list -t all | wc -l
26496
# time zfs list -t all
real 0m13.233s
user 0m1.281s
sys 0m11.137s
The ZFS pool is made of 2 x raidz2 vdevs of 6 HDDs each (12 disks total).
Before that it was a mirror of 2 disks and a raidz of 3 disks. Both were slow.
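For reference, a pool with that layout could be created with something like the command below; the pool name and device names are only placeholders, in practice you would use your own /dev/disk/by-id/ paths:
# zpool create tank raidz2 sda sdb sdc sdd sde sdf raidz2 sdg sdh sdi sdj sdk sdl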
I don't know why, but in my opinion this bug https://www.cvedetails.com/cve/CVE-2019-11815/ is serious. The fix is small and already merged https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=cb66ddd156203daefb8d71158036b27b0e2caf63 but the Linux distribution community...
LUKS performance depends on the CPU. ZFS performance depends on the slowest HDD/SSD in the pool.
As for raidz2, you need 6 disks, otherwise you will get allocation overhead. https://forum.proxmox.com/threads/slow-io-and-high-io-waits.37422/#post-184974
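To see which device is holding the pool back, per-vdev and per-device statistics can be watched with something like this (pool_name is a placeholder, 1 is the refresh interval in seconds):
# zpool iostat -v pool_name 1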
If you see no IO penalty inside the VM, then don't worry too much.
ZFS doesn't care about port positions. An error 'may' happen with the pool cache file, but I don't think it will. In that case, import the pool:
# zpool import -d /dev/disk/by-id/ pool_name
and it's done.
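If the pool is currently imported under the old device names, it may need to be exported first and then re-imported by id, roughly like this (pool_name is again a placeholder):
# zpool export pool_name
# zpool import -d /dev/disk/by-id/ pool_name
# zpool status pool_name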
I suggest you set sync=disabled to avoid double writes. A single disk is a single disk. ZFS doesn't have per-process IO priority.
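If you want to try that, sync can be set per pool or per dataset, something like this (rpool/data is just an example dataset name):
# zfs set sync=disabled rpool/data
# zfs get sync rpool/data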
What you have to know:
1. The data goes like this: program -> ZFS write cache (not ZIL) -> disk
2. ZFS flushes data from the write cache to disk every ~5 seconds (the txg timeout, see the check below)
3. Then the write cache...
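On Linux, that ~5 second interval corresponds to the zfs_txg_timeout module parameter (default 5), which can be checked like this:
# cat /sys/module/zfs/parameters/zfs_txg_timeout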
What is your ZFS pool configuration? Under heavy load (depending on how slow the setup is) the pool can become very unresponsive. To keep SSH usable, put the server OS on a separate pool to avoid IO waits.
For ZFS, don't forget that SYNC writes are written to the ZFS LOG device. If you don't have an external ZFS LOG device, then the pool itself acts as the LOG device too. That means double writes.
If SYNC is important and you want good write performance, then add a single good SATA/NVMe SSD as the ZFS pool LOG device.
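Adding a log device is a single command; the pool name and device path below are only placeholders for your pool and your SSD's /dev/disk/by-id/ entry:
# zpool add pool_name log /dev/disk/by-id/nvme-EXAMPLE_SSD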