Hello,
I think I have a problem with ZFS performance: it's far below what I see advertised on this forum, especially considering the hardware I'm using. Unfortunately I can't spot the issue myself, so I hope someone will be smarter than me.
The problem is the IOPS I can get from a ZFS pool with six 1 TB SATA disks and two SSDs for ZIL and cache. I'm nowhere near the expected ~600 IOPS from the disks alone, or the 2000-3000 when using the ZIL; I actually get ~60 with the ZIL turned off and ~160 with it on.
The pool is configured as three vdevs of two disks each in a mirror, plus two SSDs partitioned into a mirrored ZIL and two cache partitions:
Code:
# pveperf /rpool/t/
CPU BOGOMIPS: 57529.56
REGEX/SECOND: 2119840
HD SIZE: 1537.23 GB (rpool/t)
FSYNCS/SECOND: 161.78
If I disable sync on the test dataset (zfs set sync=disabled rpool/t) I get an astonishing ~20000 IOPS, which tells me the ZIL device is not doing its job at all.
Code:
# pveperf /rpool/t/
CPU BOGOMIPS: 57529.56
REGEX/SECOND: 2221426
HD SIZE: 1537.22 GB (rpool/t)
FSYNCS/SECOND: 20918.49
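As a cross-check independent of pveperf: its FSYNCS/SECOND figure is essentially a loop of small synchronous writes, which can be roughly approximated with dd's oflag=dsync. This is just a sketch; the 4K block size and file name are illustrative, and it should be run from inside the dataset under test:

```shell
# Rough sync-write IOPS cross-check (assumes GNU dd on Linux).
# Run from inside the dataset under test, e.g. /rpool/t.
# oflag=dsync commits each 4K block before the next one is issued, so
# the rate dd reports approximates synchronous writes per second.
dd if=/dev/zero of=dsync-test.bin bs=4k count=200 oflag=dsync conv=notrunc
rm -f dsync-test.bin
```

With a working SSD SLOG this should land in the thousands per second, not the ~160 pveperf reports above.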
PVE: pve-manager/3.4-6/102d4547 (running kernel: 2.6.32-39-pve)
ZFS: Loaded module v0.6.4.1-1, ZFS pool version 5000, ZFS filesystem version 5
This is the ZFS config:
Code:
# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 4h54m with 0 errors on Thu Jan 28 19:29:30 2016
config:

        NAME                             STATE     READ WRITE CKSUM
        rpool                            ONLINE       0     0     0
          mirror-0                       ONLINE       0     0     0
            scsi-35000c5007a49788d-part2 ONLINE       0     0     0
            scsi-35000c5007a496f40-part2 ONLINE       0     0     0
          mirror-1                       ONLINE       0     0     0
            scsi-35000c5007a4ddce6       ONLINE       0     0     0
            scsi-35000c5007a497529       ONLINE       0     0     0
          mirror-2                       ONLINE       0     0     0
            scsi-35000c5007a4983e0       ONLINE       0     0     0
            scsi-35000c5007a4a292a       ONLINE       0     0     0
        logs
          mirror-3                       ONLINE       0     0     0
            scsi-3500a075110b740af-part1 ONLINE       0     0     0
            scsi-35e83a9703a5a01e8-part1 ONLINE       0     0     0
        cache
          scsi-3500a075110b740af-part2   ONLINE       0     0     0
          scsi-35e83a9703a5a01e8-part2   ONLINE       0     0     0

errors: No known data errors
And here's the zpool iostat output. As you can see, the cache seems mostly unused too:
Code:
# zpool iostat -v
                                    capacity     operations    bandwidth
pool                             alloc   free   read  write   read  write
-------------------------------  -----  -----  -----  -----  -----  -----
rpool                            1.14T  1.58T     16     47  81.6K   194K
  mirror                          390G   538G      4     14  23.9K  47.6K
    scsi-35000c5007a49788d-part2      -      -      1      4  16.9K  83.2K
    scsi-35000c5007a496f40-part2      -      -      1      4  16.3K  83.2K
  mirror                          391G   537G      5     14  29.1K  46.0K
    scsi-35000c5007a4ddce6            -      -      2      5  16.2K  46.7K
    scsi-35000c5007a497529            -      -      2      5  16.0K  46.7K
  mirror                          390G   538G      5     16  28.6K  53.3K
    scsi-35000c5007a4983e0            -      -      2      5  16.2K  54.1K
    scsi-35000c5007a4a292a            -      -      2      5  15.4K  54.1K
logs                                  -      -      -      -      -      -
  mirror                         40.4M  7.90G      0      1      0  47.4K
    scsi-3500a075110b740af-part1      -      -      0      1     25  47.4K
    scsi-35e83a9703a5a01e8-part1      -      -      0      1     25  47.4K
cache                                 -      -      -      -      -      -
  scsi-3500a075110b740af-part2    462M   194G      0      1  2.52K  11.7K
  scsi-35e83a9703a5a01e8-part2    458M  91.3G      0      0  1.41K  12.0K
-------------------------------  -----  -----  -----  -----  -----  -----
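One way to see whether the SLOG is actually absorbing sync writes is to run `zpool iostat -v rpool 1` while the benchmark is going and watch the write-operations column of the log mirror. A small awk filter can pull that column out; this is a sketch whose field positions ($5 = write ops) are assumed from the static output above, demonstrated here against a captured sample line:

```shell
# Extract write ops/s for the log vdev from `zpool iostat -v` output.
# Field positions ($5 = write operations) assumed from the layout above.
# Demonstrated on a captured sample; in practice pipe the live command:
#   zpool iostat -v rpool 1 | awk '...'
printf 'logs    -  -  -  -  -  -\n  mirror  40.4M  7.90G  0  1  0  47.4K\n' |
awk '/^logs/ { inlog = 1; next }
     inlog && $1 == "mirror" { print "log write ops/s:", $5; exit }'
```

During a sync-heavy workload this number should track the fsync rate; if it stays near zero while FSYNCS/SECOND is running, the sync writes are bypassing the SLOG.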
Any hints?