Bad fsync on /dev/mapper/pve-root ?

mcflym

Renowned Member
Jul 10, 2013
Hi,

I did a fresh Proxmox install on an SSD.

If I run pveperf I get the following result:

root@pve:~# pveperf
CPU BOGOMIPS: 179207.68
REGEX/SECOND: 3038160
HD SIZE: 28.91 GB (/dev/mapper/pve-root)
BUFFERED READS: 522.68 MB/sec
AVERAGE SEEK TIME: 0.08 ms
FSYNCS/SECOND: 223.16
DNS EXT: 38.34 ms
DNS INT: 48.91 ms (fritz.box)

Here is another machine (the root is an NVMe drive):
root@pvefw:~# pveperf
CPU BOGOMIPS: 48000.00
REGEX/SECOND: 897602
HD SIZE: 58.07 GB (/dev/mapper/pve-root)
BUFFERED READS: 338.97 MB/sec
AVERAGE SEEK TIME: 0.08 ms
FSYNCS/SECOND: 1079.61
DNS EXT: 37.57 ms
DNS INT: 1.14 ms (localhost)

Neither of them is great, right? And how can I fix the DNS INT on the first machine?

If I run pveperf on /dev/mapper instead, I get:
root@pve:~# pveperf /dev/mapper
CPU BOGOMIPS: 179207.68
REGEX/SECOND: 3018220
HD SIZE: 31.32 GB (udev)
FSYNCS/SECOND: 167080.28
DNS EXT: 33.38 ms
DNS INT: 52.09 ms (fritz.box)

root@pvefw:~# pveperf /dev/mapper
CPU BOGOMIPS: 48000.00
REGEX/SECOND: 883821
HD SIZE: 15.64 GB (udev)
FSYNCS/SECOND: 27287.82
DNS EXT: 44.54 ms
DNS INT: 1.31 ms (localhost)

My /etc/fstab files look like this:

pve:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=D4C4-AB55 /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

UUID=b332dd18-14db-4aea-8424-62a46e143467 /mnt/proxmox_vm ext4 discard,noatime,nodiratime 0 1
UUID=74cf6b19-807f-4528-9f87-d19fe89036f6 /mnt/proxmox_backup ext4 discard,noatime,nodiratime 0 1

pvefw:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext4 errors=remount-ro 0 1
UUID=5D14-25AB /boot/efi vfat defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

I looked around but I can't figure out what causes these bad fsync numbers.
 
Neither of them is great, right?
How do you come to that conclusion? They look perfectly normal for a consumer-grade SSD or NVMe drive to me.
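
For reference, consumer drives without power-loss protection typically land in the hundreds of fsyncs per second, while enterprise SSDs reach thousands. If you want to cross-check pveperf with an independent tool, fio can measure small synchronous writes directly. A minimal sketch, assuming fio is installed and /root lives on pve-root (adjust --directory as needed):

root@pve:~# fio --name=fsync-test --ioengine=sync --rw=write --bs=4k --size=128m --fsync=1 --directory=/root

The write IOPS it reports should be in the same ballpark as pveperf's FSYNCS/SECOND.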

If I run pveperf on /dev/mapper
/dev/mapper (well, /dev in general) is a devtmpfs, a RAM-backed filesystem - you can even see it in your output, where HD SIZE reports (udev) instead of a real block device. So pveperf /dev/mapper measures not your disk but your RAM, which is obviously much faster, especially for fsyncs (essentially small synchronous writes).
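
You can verify that yourself with a quick check:

root@pve:~# findmnt -T /dev/mapper
root@pve:~# df -h /dev/mapper

Both will show the udev devtmpfs rather than a real block device. To benchmark an actual filesystem, point pveperf at a mount point such as / (the default) or /mnt/proxmox_vm.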

How can I fix the DNS INT on the first machine?
Again, that result is not really terrible. If you want to improve it, you'd have to look at configuring your DNS server differently (e.g. use a different upstream DNS).
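
If you want to experiment, the node's resolver config lives in /etc/resolv.conf (the Proxmox GUI writes it via the node's DNS tab). A hypothetical example pointing straight at a public resolver instead of the FRITZ!Box - note that an external resolver will not know internal names like fritz.box, so adjusting the upstream on the router itself is usually the cleaner route:

root@pve:~# cat /etc/resolv.conf
search fritz.box
nameserver 1.1.1.1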

It also seems the second server's DNS INT lookup barely left the machine - per the output it resolved localhost rather than an actual LAN name like fritz.box, possibly with a cache involved as well - so the drastically better result is not a reliable comparison either.
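
One way to rule caching in or out is to time the same lookup twice in a row, for example:

root@pve:~# time getent hosts fritz.box
root@pve:~# time getent hosts fritz.box

If the second run is drastically faster than the first, a cache answered it.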
 
