Horrible ZFS performance, again
4 x Constellation HDDs in RAID 10.
Only getting 150-200 MB/s write with dd; inside the VM only 10 MB/s (VirtIO, Win10, no cache; also tested qcow2 with writeback on a ZFS dataset).
The log device does not seem to be used at all (quick check sketched below)!
The VMs cause massive load of up to 25 (2 x 4 cores).
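For reference, this is roughly how I am checking that (a sketch; pool name as above, 2-second interval picked arbitrarily, iostat comes from the sysstat package):

# per-vdev throughput; the logs line should show writes while a sync-heavy workload runs
zpool iostat -v Raid10 2

# per-disk utilisation and wait times, to see whether the high load is really I/O wait
iostat -x 2

As far as I understand, the SLOG only sees synchronous writes, so a plain async dd would not touch it anyway.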
Already tried:
- Offlining each HDD in turn
- SMART tests
- Upgrading zpool and ZFS
- Using the NVMe for log and cache
- Disabling compression
- Benchmarking with dd writing zeros to a file (roughly as sketched after this list)
- For comparison: the same system with 4 x SSDs reaches up to 800 MB/s write with no load
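For completeness, the dd runs look roughly like this (a sketch; the target path assumes the pool's default mountpoint, and the sizes are examples rather than the exact values I used):

# asynchronous write test onto the pool; zeros compress away under lz4, hence also testing with compression disabled
dd if=/dev/zero of=/Raid10/ddtest bs=1M count=4096 conv=fdatasync

# synchronous write test, which should be the case where the log device actually gets used
dd if=/dev/zero of=/Raid10/ddtest bs=1M count=1024 oflag=sync

conv=fdatasync forces a flush at the end, so the number is not just the write landing in RAM.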
Perhaps it's about the fragmentation?
Any advice is welcome!
NAME                        STATE     READ WRITE CKSUM
Raid10                      ONLINE       0     0     0
  mirror-0                  ONLINE       0     0     0
    wwn-0x5000c5006361d943  ONLINE       0     0     0
    scsi-35000c500636267bb  ONLINE       0     0     0
  mirror-1                  ONLINE       0     0     0
    wwn-0x5000c500634ea057  ONLINE       0     0     0
    wwn-0x5000c5006360a7eb  ONLINE       0     0     0
logs
  nvme0n1p1                 ONLINE       0     0     0
cache
  nvme0n1p2                 ONLINE       0     0     0
root@pve252:~# zpool get all Raid10
NAME PROPERTY VALUE SOURCE
Raid10 size 3.62T -
Raid10 capacity 61% -
Raid10 altroot - default
Raid10 health ONLINE -
Raid10 guid 10200424180081588444 -
Raid10 version - default
Raid10 bootfs - default
Raid10 delegation on default
Raid10 autoreplace off default
Raid10 cachefile - default
Raid10 failmode wait default
Raid10 listsnapshots off default
Raid10 autoexpand off default
Raid10 dedupditto 0 default
Raid10 dedupratio 1.00x -
Raid10 free 1.39T -
Raid10 allocated 2.23T -
Raid10 readonly off -
Raid10 ashift 12 local
Raid10 comment - default
Raid10 expandsize - -
Raid10 freeing 0 -
Raid10 fragmentation 45% -
Raid10 leaked 0 -
Raid10 multihost off default
Raid10 feature@async_destroy enabled local
Raid10 feature@empty_bpobj active local
Raid10 feature@lz4_compress active local
Raid10 feature@multi_vdev_crash_dump enabled local
Raid10 feature@spacemap_histogram active local
Raid10 feature@enabled_txg active local
Raid10 feature@hole_birth active local
Raid10 feature@extensible_dataset active local
Raid10 feature@embedded_data active local
Raid10 feature@bookmarks enabled local
Raid10 feature@filesystem_limits enabled local
Raid10 feature@large_blocks enabled local
Raid10 feature@large_dnode enabled local
Raid10 feature@sha512 enabled local
Raid10 feature@skein enabled local
Raid10 feature@edonr enabled local
Raid10 feature@userobj_accounting active
proxmox-ve: 5.1-32 (running kernel: 4.13.13-2-pve)
pve-manager: 5.1-41 (running version: 5.1-41/0b958203)
pve-kernel-4.13.13-2-pve: 4.13.13-32
libpve-http-server-perl: 2.0-8
lvm2: 2.02.168-pve6
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-19
qemu-server: 5.0-18
pve-firmware: 2.0-3
libpve-common-perl: 5.0-25
libpve-guest-common-perl: 2.0-14
libpve-access-control: 5.0-7
libpve-storage-perl: 5.0-17
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-3
pve-docs: 5.1-12
pve-qemu-kvm: 2.9.1-5
pve-container: 2.0-18
pve-firewall: 3.0-5
pve-ha-manager: 2.0-4
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.1.1-2
lxcfs: 2.0.8-1
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.7.3-pve1~bpo9