Horrible ZFS Read Performance

cpzengel

Hi guys,

since our last thread we have pinned down our serious problem and are asking for quick help.

Setup:

pve-4.4.12
Intel HBA

RAID 10 with 6x Constellation 2 TB HDDs, 6 Gbit/s link
RAID 1 with 2x Intel SSD 1.2 TB, 6 Gbit/s link

All tests on four identical servers show the same behaviour:

copy from the RAID 1 to the RAID 10: ~200 MB/s
wait until the write cache has drained (checked via zpool status)
copy the same file from the RAID 10 back to the RAID 1: 2-10 MB/s
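
To take the copy source/target out of the equation, the read speed of the HDD pool can also be measured directly with a sequential benchmark such as fio (a sketch, not from the original post; the test file path and size are placeholders, and the file should be larger than RAM so reads are not served from the ARC):
Code:
# write a large test file on the HDD pool, then read it back sequentially
fio --name=seqwrite --filename=/rpool2/fio.test --rw=write --bs=1M --size=64G --end_fsync=1
fio --name=seqread  --filename=/rpool2/fio.test --rw=read  --bs=1M --size=64G
rm /rpool2/fio.test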

We have no ideas left.
Another customer has almost identical hardware with the same parameters and no problems at all.

Please help!

chriz
 
Are these enterprise SSDs? What type of SSD? What type of HDD? Please also post the output of:
Code:
zpool list
zpool status
zfs list
zfs get all
pveperf /mountpoints_of_your_pools
smartctl -a /dev/sdX (one per disk type)
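# for example, to collect the model, rotation rate and key SMART attributes of
# all drives in one go (a sketch; adjust the device glob to your system):
for d in /dev/sd?; do
    echo "== $d =="
    smartctl -a "$d" | grep -Ei 'model|rotation|reallocated|wear'
done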
 
Hi Fireon,

thanks for your reply.
The zpools are healthy and up to date.

We found that caching was disabled on one dataset, so I enabled it again.

SMART is monitored, and the problem shows up on all servers.

The zvols are not fast enough to boot a Windows Server 2016 VM properly;
on the SSDs it works.
Only with a zvol and writeback cache is everything okay inside the VM, which is not good :(

The SSDs are enterprise Intel models.
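
For reference, the cache-related properties of an affected zvol and the per-disk cache mode in Proxmox can be checked and set roughly like this (a sketch; the zvol name is taken from the zfs list below, but the storage ID and the virtio slot are placeholders for this setup):
Code:
# show ARC caching and block size of a zvol on the HDD pool
zfs get primarycache,secondarycache,volblocksize rpool2/vms/vm-112-disk-1

# re-enable ARC caching on a dataset/zvol where it was disabled
zfs set primarycache=all rpool2/vms/vm-112-disk-1

# switch a VM disk to writeback cache in Proxmox (placeholder storage ID and slot)
qm set 112 -virtio0 <storage>:vm-112-disk-1,cache=writeback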
 
Code:
root@KGPM235:~# pveperf /rpool && pveperf /rpool2
# rpool is the SSD RAID 1, rpool2 is the RAID 10 of 6x Constellation 7.2k HDDs

CPU BOGOMIPS:      153270.72
REGEX/SECOND:      1856020
HD SIZE:           634.88 GB (rpool)
FSYNCS/SECOND:     2191.87
DNS EXT:           31.83 ms

CPU BOGOMIPS:      153270.72
REGEX/SECOND:      1860019
HD SIZE:           1502.51 GB (rpool2)
FSYNCS/SECOND:     1667.45
DNS EXT:           37.07 ms
 
Code:
zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   1.09T   455G   657G         -    28%    40%  1.00x  ONLINE  -
rpool2  5.44T  3.39T  2.05T         -    46%    62%  1.00x  ONLINE  -

root@KGPM235:~# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 1h52m with 0 errors on Sun Feb 12 02:16:35 2017
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sde2    ONLINE       0     0     0
            sda2    ONLINE       0     0     0

errors: No known data errors

  pool: rpool2
 state: ONLINE
  scan: scrub repaired 0 in 18h12m with 0 errors on Sun Feb 12 18:36:17 2017
config:

        NAME                                                STATE     READ WRITE CKSUM
        rpool2                                              ONLINE       0     0     0
          mirror-0                                          ONLINE       0     0     0
            sdb2                                            ONLINE       0     0     0
            sdc2                                            ONLINE       0     0     0
          mirror-1                                          ONLINE       0     0     0
            sdd                                             ONLINE       0     0     0
            sdf                                             ONLINE       0     0     0
          mirror-2                                          ONLINE       0     0     0
            sdg                                             ONLINE       0     0     0
            sdh                                             ONLINE       0     0     0
        logs
          ata-INTEL_SSDSC2BB120G6_BTWA552400VR120CGN-part1  ONLINE       0     0     0
        cache
          ata-INTEL_SSDSC2BB120G6_BTWA552400VR120CGN-part2  ONLINE       0     0     0

errors: No known data errors

Code:
root@KGPM235:~# zfs list
NAME                               USED  AVAIL  REFER  MOUNTPOINT
rpool                              459G   618G   144K  /rpool
rpool/ROOT                        46.7G   618G    96K  /rpool/ROOT
rpool/ROOT/pve-1                  45.5G   618G  28.7G  /
rpool/ROOT/pve-2                  1.25G   618G  1.25G  /rpool/ROOT/pve-2
rpool/data                          96K   618G    96K  /rpool/data
rpool/swap                        7.44G   623G  3.06G  -
rpool/vm                           404G   618G    96K  /rpool/vm
rpool/vm/vm-101-disk-1             329G   618G   314G  -
rpool/vm/vm-111-disk-1            36.0G   618G  31.4G  -
rpool/vm/vm-112-disk-2            33.6G   618G  27.3G  -
rpool/vm/vm-114-disk-2            5.43G   618G  4.91G  -
rpool2                            3.69T  1.58T  5.31G  /rpool2
rpool2/PMConf235                  12.6M  1.58T  5.63M  /rpool2/PMConf235
rpool2/ROOT                       27.2G  1.58T    96K  /rpool2/ROOT
rpool2/ROOT/pve-2                 27.2G  1.58T  22.0G  /rpool2/ROOT/pve-2
rpool2/Replica                    1.19T  1.58T  17.9G  /rpool2/Replica
rpool2/Replica/AUDITMAN           28.7G  1.58T  28.7G  /rpool2/Replica/AUDITMAN
rpool2/Replica/BYTSTORMAIL         335G  1.58T   278G  /rpool2/Replica/BYTSTORMAIL
rpool2/Replica/KGS02               337G  1.58T   151G  /rpool2/Replica/KGS02
rpool2/Replica/KGS06               105G  1.58T  34.8G  /rpool2/Replica/KGS06
rpool2/Replica/KGS08               358G  1.58T   209G  /rpool2/Replica/KGS08
rpool2/Replica/KGS09              28.4G  1.58T    96K  /rpool2/Replica/KGS09
rpool2/Replica/PMConf236          13.1M  1.58T  5.68M  /rpool2/Replica/PMConf236
rpool2/Replica/subvol-142-disk-1  4.04G  63.1G   901M  /rpool2/Replica/subvol-142-disk-1
rpool2/swap                        133G  1.69T  19.3G  -
rpool2/swap-zfs                    166G  1.58T    96K  /rpool2/swap-zfs
rpool2/swap-zfs/vm-111-disk-1     33.1G  1.61T  67.8M  -
rpool2/swap-zfs/vm-112-disk-1     60.0G  1.61T  25.9G  -
rpool2/swap-zfs/vm-114-disk-1     73.4G  1.61T  30.8G  -
rpool2/vms                        2.17T  1.58T   104K  /rpool2/vms
rpool2/vms/KGS01                  1.50T  1.58T  1.10T  /rpool2/vms/KGS01
rpool2/vms/KGS04                   332G  1.58T  75.6G  /rpool2/vms/KGS04
rpool2/vms/KGS05                  89.9G  1.58T  89.9G  /rpool2/vms/KGS05
rpool2/vms/SWAP                   83.6G  1.58T  83.6G  /rpool2/vms/SWAP
rpool2/vms/vm-111-disk-2           120M  1.58T   119M  -
rpool2/vms/vm-112-disk-1           145G  1.67T  31.6G  -
rpool2/vms/vm-114-disk-1          33.0G  1.58T  33.0G  -
 
I can suggest the arc_summary and arcstat tools. With arcstat you can see where the ARC misses are. I use atop to watch HDD activity.
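
For example (a sketch; on older ZFS on Linux releases the tools may be installed as arcstat.py and arc_summary.py):
Code:
# overall ARC size, hit ratio and tuning parameters
arc_summary | less

# one line per interval: reads, ARC misses and miss percentage
arcstat 1

# interactive per-disk busy/throughput view
atop 2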
 
