Poor pveperf performance

Mar 19, 2018
I think my Proxmox node may not be performing well. According to the man page for pveperf, FSYNCS/SECOND should be > 200, but I'm getting the following:

Code:
root@proxmox:~# pveperf
CPU BOGOMIPS:      25536.00
REGEX/SECOND:      3462283
HD SIZE:           2451.76 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     85.86
DNS EXT:           220.93 ms
DNS INT:           174.58 ms (seb)

I have a 120GB SSD (which, by the way, is showing 7% wear) and a 3TB WD Green 7200rpm HDD. Both are members of the main ZFS rpool.

Also, my DNS times seem pretty slow compared to others I've seen, especially for internal DNS.

I use pfSense in a VM (on the Proxmox node in question) for internal DNS resolution, so I thought it would be faster. Pinging internal hosts by domain name from Proxmox takes ~0.18 ms, so that's definitely fast, and pinging 8.8.8.8 takes 19.6 ms. I'm curious why pveperf reports it as so slow.
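To check whether the resolver itself is slow, timing lookups directly against whatever nameserver is configured in /etc/resolv.conf should help. A minimal sketch; the hostname and resolver IP below are placeholders, not from this setup:

Code:
cat /etc/resolv.conf                    # which nameserver does the node actually use?
dig @192.168.1.1 seb.mydomain.local     # internal lookup against the pfSense resolver; check the "Query time" line
dig @192.168.1.1 www.proxmox.com        # external lookup for comparison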

1. Is this performance cause for concern / possibly causing other stability issues? (especially FSYNC)
2. How could I debug/check what to improve? (a few checks are sketched below)
3. Is it a mistake to have the SSD as part of the ZFS pool?
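For question 2, a few read-only checks that are commonly used to narrow down ZFS/disk performance, assuming the SSD is /dev/sda and the HDD is /dev/sdb:

Code:
zpool status -v rpool       # pool layout and any errors
zpool iostat -v rpool 2     # per-vdev throughput, sampled every 2 seconds
smartctl -a /dev/sda        # SSD health/wear (smartmontools package)
smartctl -a /dev/sdb        # same for the HDD
pveperf /rpool              # run the benchmark against a specific mountpoint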

I'm thinking of buying multiple 7200rpm drives + a new SSD to improve the storage situation. Is this likely to help?

Thanks for any input/guidance.

EDIT: I also see an I/O delay of ~2-4% on average when the node is at 30% CPU load with negligible memory use, which I don't think is good.
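To correlate that I/O delay with a specific disk, extended iostat output (from the sysstat package) is a reasonable first look; this is only a sketch of what to watch:

Code:
apt install sysstat     # only if iostat is not already installed
iostat -x 2             # per-device extended stats every 2 seconds; watch %util and the await columns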

image of dashboard: https://imgur.com/a/SMdzA64
 
I meant: should I have installed the Proxmox OS on the SSD, outside of the ZFS pool, and then used the HDD for rpool, storing VM disks, etc.?
sda is 120GB SSD
sdb is normal spinning HDD 7200rpm
Both are SATA disks


Code:
root@proxmox:~# zpool status
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 1h28m with 0 errors on Sun Apr 14 01:52:23 2019
config:

   NAME        STATE     READ WRITE CKSUM
   rpool       ONLINE       0     0     0
     sda2      ONLINE       0     0     0
     sdb       ONLINE       0     0     0

errors: No known data errors
root@proxmox:~# lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda         8:0    0 111.8G  0 disk
├─sda1      8:1    0  1007K  0 part
├─sda2      8:2    0 111.8G  0 part
└─sda9      8:9    0     8M  0 part
sdb         8:16   0   2.7T  0 disk
├─sdb1      8:17   0   2.7T  0 part
└─sdb9      8:25   0     8M  0 part
zd0       230:0    0     8G  0 disk [SWAP]
zd16      230:16   0   8.5G  0 disk
zd32      230:32   0  32.5G  0 disk
zd48      230:48   0   8.5G  0 disk
zd64      230:64   0  16.5G  0 disk
zd80      230:80   0   8.5G  0 disk
zd96      230:96   0  16.5G  0 disk
zd112     230:112  0   1.2T  0 disk
zd128     230:128  0   8.5G  0 disk
zd144     230:144  0    80G  0 disk
├─zd144p1 230:145  0     1G  0 part
└─zd144p2 230:146  0    79G  0 part
zd160     230:160  0   8.5G  0 disk
zd176     230:176  0    10G  0 disk
├─zd176p1 230:177  0   512K  0 part
├─zd176p2 230:178  0   9.5G  0 part
└─zd176p3 230:179  0   512M  0 part
zd192     230:192  0    10G  0 disk
├─zd192p1 230:193  0    10G  0 part
├─zd192p5 230:197  0   9.5G  0 part
└─zd192p6 230:198  0   512M  0 part
zd208     230:208  0   1.2T  0 disk
└─zd208p1 230:209  0   1.2T  0 part
zd224     230:224  0  16.5G  0 disk
 
This is not recommended.
You can do two things:
1.) As you suggest, use the SSD as the OS disk.
2.) Use the SSD as a ZIL (SLOG), but only if it is an enterprise SSD (see the sketch below).

But I would recommend variant 1.
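For reference, variant 2 would look roughly like the following; the device path is a placeholder, and using a stable /dev/disk/by-id/ name instead of sdX is generally safer:

Code:
# add the SSD as a separate ZFS intent log (SLOG) for rpool
zpool add rpool log /dev/disk/by-id/ata-YOUR_SSD_ID
# it can be removed again later if needed
zpool remove rpool /dev/disk/by-id/ata-YOUR_SSD_ID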
 
Completely rebuilt and reinstalled. Now using a 1TB SSD for the OS + VM disks. I will use a 4TB HDD + 120GB SSD as a totally separate ZFS pool that I can share over NFS for shared storage accessible on the network.
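Roughly what I have in mind for the separate pool, sketched with placeholder pool and device names (whether the 120GB SSD ends up as L2ARC or as a SLOG is still open):

Code:
zpool create -o ashift=12 tank /dev/disk/by-id/ata-4TB_HDD_ID     # HDD as the data vdev
zpool add tank cache /dev/disk/by-id/ata-120GB_SSD_ID             # SSD as L2ARC read cache
zfs create tank/share
zfs set sharenfs=on tank/share     # requires nfs-kernel-server for the NFS export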

If I had enough NICs I'd even separate the storage and regular networks. I do have a VLAN-capable switch, though; would buying more NICs to physically separate them be better? (a VLAN-based sketch follows for comparison)
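If I stayed with VLANs instead of extra NICs, my understanding is a tagged storage network would look roughly like this in /etc/network/interfaces; the interface name, VLAN ID, and addresses are assumptions, not taken from this setup:

Code:
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# hypothetical storage VLAN (ID 20) tagged on the same physical NIC
auto eno1.20
iface eno1.20 inet static
    address 10.20.0.10/24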

pveperf FSYNCS/SECOND is now about 12-15x faster than before :)

Code:
root@proxmox:~# pveperf
CPU BOGOMIPS:      25536.00
REGEX/SECOND:      3574454
HD SIZE:           898.21 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND:     1096.95
DNS EXT:           1002.23 ms
DNS INT:           1001.78 ms (seb)
root@proxmox:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool   928G  2.05G   926G         -     0%     0%  1.00x  ONLINE  -
root@proxmox:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   rpool       ONLINE       0     0     0
     sda3      ONLINE       0     0     0

errors: No known data errors
root@proxmox:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     2.05G   897G   104K  /rpool
rpool/ROOT                1.26G   897G    96K  /rpool/ROOT
rpool/ROOT/pve-1          1.26G   897G  1.26G  /
rpool/data                 808M   897G    96K  /rpool/data
rpool/data/vm-100-disk-0   808M   897G   808M  -
rpool/data/vm-100-disk-1    68K   897G    68K  -
 
