[SOLVED] Poor ZFS write speed

Progon

Member
Jan 16, 2021
Hello everyone,

I bought three WD80EDAZ-11TA3A0 8 TB hard drives a few days ago. I have them running with ZFS in RAIDZ. In addition, I pass the pool into a CT via a mount point, and that CT serves it over SMB. When I tried to test the speed, it settled at about 45 MB/s.
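For reference, the mount point is set up roughly like this (the container ID, dataset and paths below are just examples, not my exact config):
Code:
# Example only: create a dataset for the share and bind-mount it into the SMB container
zfs create wd16tb/share
pct set 103 -mp0 /wd16tb/share,mp=/mnt/share
# Inside the CT, /mnt/share is then exported via Samba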

Specs of my Proxmox server:

i5-4440
18 GB RAM
480 GB SSD for VMs
3x 8 TB for data
I also still have a 120 GB SSD and a 240 GB SSD here as backups.

zpool list
Code:
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
ssd480   444G  19.4G   425G        -         -    23%     4%  1.00x    ONLINE  -
wd16tb  21.8T  83.3G  21.7T        -         -     0%     0%  1.00x    ONLINE  -



zpool status
Code:
pool: ssd480
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:01:11 with 0 errors on Sun Jan 10 00:25:12 2021
config:

        NAME                                 STATE     READ WRITE CKSUM
        ssd480                               ONLINE       0     0     0
          ata-SATA3_480GB_SSD_2020091901333  ONLINE       0     0     0

errors: No known data errors

  pool: wd16tb
 state: ONLINE
  scan: none requested
config:

        NAME                                   STATE     READ WRITE CKSUM
        wd16tb                                 ONLINE       0     0     0
          raidz1-0                             ONLINE       0     0     0
            ata-WDC_WD80EDAZ-11TA3A0_VGH42L5G  ONLINE       0     0     0
            ata-WDC_WD80EDAZ-11TA3A0_VGH4P9WG  ONLINE       0     0     0
            ata-WDC_WD80EDAZ-11TA3A0_VGH4ZT3G  ONLINE       0     0     0

zfs list
Code:
NAME                       USED  AVAIL     REFER  MOUNTPOINT
ssd480                    56.4G   374G       24K  /ssd480
ssd480/subvol-101-disk-0  1.54G  48.5G     1.54G  /ssd480/subvol-101-disk-0
ssd480/subvol-103-disk-0  3.26G  21.7G     3.26G  /ssd480/subvol-103-disk-0
ssd480/vm-104-disk-0         3M   374G       12K  -
ssd480/vm-104-disk-1      51.6G   411G     14.5G  -
wd16tb                    55.5G  14.0T     55.5G  /wd16tb
 

Attachments

  • arcstat.txt (4.3 KB)
  • ss.PNG (20.4 KB)
RAIDZ isn't that bad, at least for sequential async writes. My FreeNAS is using 4x 8 TB CMR HDDs as raidz1 and I'm always capped at 118 MB/s by my Gbit NIC.
 
... It all depends on WHAT you are writing. Small HTML files? 45 MB/s -> awesome.
ISO images? Not so much.
Couldn't find much about your drives. They seem to be some WD Red labeled ones?!
Also no indication whether the drives are CMR/PMR or SMR...
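If you want to check, something like this should at least show the exact model and the reported rotation rate (the device name is just an example):
Code:
# Print model and rotation rate for one drive; /dev/sda is a placeholder
smartctl -i /dev/sda | grep -E 'Device Model|Rotation Rate'
# Or list all disks at once
lsblk -d -o NAME,MODEL,ROTA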
 
Those drives should be WD "white" label drives shucked from WD Elements USB HDDs. They should be air-filled (not helium), run at 7200 RPM (instead of the 5400 RPM on the label) and be CMR... according to the datahoarder reddit. So they should be fine and should do up to 210 MB/s writing big sequential async data.
 
I just saw that your IO delay is up to 70%. That's way too high.

What do zfs get all wd16tb and zpool iostat wd16tb return while copying?

I would think your transfer speed is so slow because ZFS is somehow doing a lot of IOPS and your HDDs can't handle them.
Is one of the VMs maybe using that wd16tb pool too?
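Something like this, for example, shows the per-disk load while the copy is running (the 2-second refresh interval is just an example):
Code:
# Per-vdev read/write operations and bandwidth, refreshed every 2 seconds
zpool iostat -v wd16tb 2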
 
zfs get all wd16tb
Code:
root@pve:~# zfs get all wd16tb
NAME    PROPERTY              VALUE                  SOURCE
wd16tb  type                  filesystem             -
wd16tb  creation              Sat Jan 16 19:58 2021  -
wd16tb  used                  60.6G                  -
wd16tb  available             14.0T                  -
wd16tb  referenced            60.6G                  -
wd16tb  compressratio         1.26x                  -
wd16tb  mounted               yes                    -
wd16tb  quota                 none                   default
wd16tb  reservation           none                   default
wd16tb  recordsize            128K                   default
wd16tb  mountpoint            /wd16tb                default
wd16tb  sharenfs              off                    default
wd16tb  checksum              on                     default
wd16tb  compression           lz4                    local
wd16tb  atime                 on                     default
wd16tb  devices               on                     default
wd16tb  exec                  on                     default
wd16tb  setuid                on                     default
wd16tb  readonly              off                    default
wd16tb  zoned                 off                    default
wd16tb  snapdir               hidden                 default
wd16tb  aclinherit            restricted             default
wd16tb  createtxg             1                      -
wd16tb  canmount              on                     default
wd16tb  xattr                 sa                     local
wd16tb  copies                1                      default
wd16tb  version               5                      -
wd16tb  utf8only              off                    -
wd16tb  normalization         none                   -
wd16tb  casesensitivity       sensitive              -
wd16tb  vscan                 off                    default
wd16tb  nbmand                off                    default
wd16tb  sharesmb              off                    default
wd16tb  refquota              none                   default
wd16tb  refreservation        none                   default
wd16tb  guid                  11063783818216270086   -
wd16tb  primarycache          all                    default
wd16tb  secondarycache        all                    default
wd16tb  usedbysnapshots       0B                     -
wd16tb  usedbydataset         60.6G                  -
wd16tb  usedbychildren        4.20M                  -
wd16tb  usedbyrefreservation  0B                     -
wd16tb  logbias               latency                default
wd16tb  objsetid              54                     -
wd16tb  dedup                 off                    default
wd16tb  mlslabel              none                   default
wd16tb  sync                  standard               default
wd16tb  dnodesize             legacy                 default
wd16tb  refcompressratio      1.26x                  -
wd16tb  written               60.6G                  -
wd16tb  logicalused           76.1G                  -
wd16tb  logicalreferenced     76.1G                  -
wd16tb  volmode               default                default
wd16tb  filesystem_limit      none                   default
wd16tb  snapshot_limit        none                   default
wd16tb  filesystem_count      none                   default
wd16tb  snapshot_count        none                   default
wd16tb  snapdev               hidden                 default
wd16tb  acltype               off                    default
wd16tb  context               none                   default
wd16tb  fscontext             none                   default
wd16tb  defcontext            none                   default
wd16tb  rootcontext           none                   default
wd16tb  relatime              off                    default
wd16tb  redundant_metadata    all                    default
wd16tb  overlay               off                    default
wd16tb  encryption            off                    default
wd16tb  keylocation           none                   default
wd16tb  keyformat             none                   default
wd16tb  pbkdf2iters           0                      default
wd16tb  special_small_blocks  0                      default

zpool iostat wd16tb
Code:
root@pve:~# zpool iostat wd16tb
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wd16tb      97.8G  21.7T      0     22   136K  3.44M

The IO delay is not as high as yesterday.

I run one CT that has the RAIDZ mounted via a mount point, but I tested with it both shut down and running, and the results are the same.
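To rule out SMB and the network, I could also run a local sequential write test directly on the pool, something like this (needs fio installed; path and size are just examples):
Code:
# Local 4 GiB sequential write test on the pool, fsync at the end
fio --name=seqwrite --directory=/wd16tb --rw=write --bs=1M --size=4G --numjobs=1 --end_fsync=1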
 

Attachments

  • iodelay.PNG (11.8 KB)
  • smb speed.PNG (10.8 KB)
I found my stupid issue... It was my internal drive, which is broken and only copies at about 50 MB/s.
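For reference, a quick raw read test of the suspect source disk makes this easy to spot (the device name is just a placeholder):
Code:
# Raw sequential read speed of the suspect source drive
hdparm -t /dev/sdb
# Quick SMART health check of the same drive
smartctl -H /dev/sdb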
 
