Poor performance with ZFS

@Nemesiz, is this normal in your opinion? The old 2x 500GB drives in the public pool (sync=standard) are much faster than the 4x 1TB relatively new WD RED NAS drives? There must be a problem — I can't believe this performance for rpool is normal, and I'd rather not have to buy another 2.5 TB SSD ZIL drive...

Code:
root@pve-klenova:~# zfs get sync
NAME                      PROPERTY  VALUE     SOURCE
public                    sync      standard  default
public/vm-200-disk-1      sync      standard  default
rpool                     sync      standard  local
rpool/ROOT                sync      standard  inherited from rpool
rpool/ROOT/pve-1          sync      standard  inherited from rpool
rpool/data                sync      standard  inherited from rpool
rpool/data/vm-200-disk-1  sync      standard  inherited from rpool
rpool/data/vm-201-disk-1  sync      standard  inherited from rpool
rpool/data/vm-201-disk-2  sync      standard  inherited from rpool
rpool/data/vm-211-disk-1  sync      standard  inherited from rpool
rpool/swap                sync      always    local

root@pve-klenova:~# pveperf /rpool/
CPU BOGOMIPS:      38401.52
REGEX/SECOND:      437888
HD SIZE:           655.46 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     61.59
DNS EXT:           36.79 ms
DNS INT:           19.67 ms (elson.sk)

root@pve-klenova:~# pveperf /public/
CPU BOGOMIPS:      38401.52
REGEX/SECOND:      442513
HD SIZE:           0.56 GB (public)
FSYNCS/SECOND:     192.03
DNS EXT:           185.75 ms
DNS INT:           18.53 ms (elson.sk)
root@pve-klenova:~# zpool status
  pool: public
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Oct  8 00:24:02 2017
config:

    NAME                        STATE     READ WRITE CKSUM
    public                      ONLINE       0     0     0
      ata-MB0500EBZQA_Z1M0EHYH  ONLINE       0     0     0
      ata-MB0500EBZQA_Z1M0EGEJ  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 6h18m with 0 errors on Sun Oct  8 06:42:22 2017
config:

    NAME                                                STATE     READ WRITE CKSUM
    rpool                                               ONLINE       0     0     0
      mirror-0                                          ONLINE       0     0     0
        ata-WDC_WD10EFRX-68PJCN0_WD-WCC4J2021886-part2  ONLINE       0     0     0
        ata-WDC_WD10EFRX-68JCSN0_WD-WMC1U6546808-part2  ONLINE       0     0     0
      mirror-1                                          ONLINE       0     0     0
        ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J2AK75T9        ONLINE       0     0     0
        ata-WDC_WD10EFRX-68FYTN0_WD-WCC4J1JE0SFR        ONLINE       0     0     0

errors: No known data errors
 
The two pools are configured differently. The public pool is a plain stripe of two disks (like RAID0), while rpool is two mirrored pairs (like RAID10). A sync write to a mirror only completes when every disk in it has acknowledged, so rpool runs at the speed of its slowest HDD.
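For reference, here is roughly how the two layouts would have been created, and how a log (SLOG) device could be attached if sync latency on rpool matters. The device names are placeholders, not your actual disks — adjust them to your system:

```shell
# Placeholder device names -- substitute your own /dev/disk/by-id entries.

# public: two top-level disk vdevs, striped (RAID0-like, no redundancy):
#   zpool create public ata-DISK1 ata-DISK2

# rpool: two mirror vdevs, striped together (RAID10-like):
#   zpool create rpool mirror ata-DISK1 ata-DISK2 mirror ata-DISK3 ata-DISK4

# If sync writes are the bottleneck, a small SSD SLOG (mirrored for safety)
# absorbs the ZIL traffic instead of the spinning disks:
zpool add rpool log mirror /dev/disk/by-id/ata-SSD1 /dev/disk/by-id/ata-SSD2
```

A small, low-latency SSD is enough for a SLOG; it only needs to hold a few seconds of sync writes.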

I like the atop tool for watching all server activity on one screen. To find the slowest HDD, look for the disk with the most I/O and the highest average I/O time.
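If atop is not installed, per-device numbers can also be pulled from zpool iostat or from sysstat's iostat; a disk with noticeably higher wait times than its mirror partner is the one dragging the pool down:

```shell
# Per-vdev bandwidth and IOPS, refreshed every 5 seconds:
zpool iostat -v rpool 5

# Per-device latency histograms (available on ZFS on Linux 0.7+):
zpool iostat -w rpool 5

# Classic sysstat view -- a high "await" or "%util" column marks the slow disk:
iostat -x 5
```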
 