Wondering about load...


Can someone explain to me what is causing a high load on my PVE hosts every night at 2am?
Maybe a backup job?

I would check the crons (/etc/crontab, /etc/cron.d/, /etc/cron.daily/) and crontab -l, as well as systemctl list-timers --all.
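If it really is a scheduled job, a quick way to narrow it down is to grep everything schedulable for "vzdump" (PVE's built-in backup tool) or the word "backup". A minimal sketch, assuming the standard Debian/PVE paths:

```shell
#!/bin/sh
# Scan the usual cron locations, the per-user crontab, and systemd
# timers for anything that looks like a backup job. "vzdump" is the
# name of PVE's built-in backup tool.
hits=$( { cat /etc/crontab /etc/cron.d/* /etc/cron.daily/* 2>/dev/null
          crontab -l 2>/dev/null
          systemctl list-timers --all 2>/dev/null
        } | grep -ci 'vzdump\|backup' )
echo "possible backup entries found: ${hits:-0}"
```

If that turns up nothing, the job may also live in the PVE GUI under Datacenter -> Backup rather than in a plain cron file.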
It's definitely not a VM causing the load. It seems to be PVE itself; I couldn't identify the reason.
You could run top or htop when seeing it working that hard to identify the process.
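If the spike happens while you're asleep, you can let cron capture it for you. A sketch (the 02:00 timing, the log path, and the script name are my assumptions): put something like `58 1 * * * /usr/local/bin/loadsnap.sh` in root's crontab, with loadsnap.sh being roughly:

```shell
#!/bin/sh
# Take a few snapshots of the load and the top CPU consumers and
# append them to a log file. Log path and iteration count are
# arbitrary choices for this sketch.
LOG=/var/tmp/loadsnap.log
for i in 1 2 3; do
    {
        date
        uptime
        # Prefer top's batch mode; fall back to ps if top is missing.
        (top -b -n 1 2>/dev/null || ps aux) | head -20
    } >> "$LOG"
    sleep 1   # use something like 60 in real use
done
echo "snapshots written to $LOG"
```

The next morning the log shows which process was busy around the spike.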
1) ext4 on NVMe disk (Samsung 990-pro 2TB), i5-10500T@2.3Ghz CPU, 64GB RAM
2) PVE8.0.3, 2 VMs, (win10, Openmediavault)

root@pve1:~# pveperf
CPU BOGOMIPS:      55199.16
REGEX/SECOND:      4075159
HD SIZE:           93.93 GB (/dev/mapper/pve-root)
BUFFERED READS:    2220.44 MB/sec
FSYNCS/SECOND:     175.60
DNS EXT:           29.25 ms
DNS INT:           0.62 ms (local)

root@pve:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   32G     0   32G   0% /dev
tmpfs                 6.3G  1.2M  6.3G   1% /run
/dev/mapper/pve-root   94G  4.9G   85G   6% /
tmpfs                  32G   46M   32G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
/dev/sda1             3.6T  1.5T  2.0T  44% /mnt/backup
/dev/fuse             128M   20K  128M   1% /etc/pve
/dev/sdb1             1.8T  943G  798G  55% /mnt/backup/extern/daily
tmpfs                 6.3G     0  6.3G   0% /run/user/0
I now believe it's just misleading charts in the dashboard. I'll try to dig into it further with htop etc.
What about the fsyncs/second value that pveperf returns? Proxmox recommends 200 or more. How can I reach that value? Or can the recommendation be safely ignored? The host is actually running smoothly.
Fsyncs are small sync writes. Consumer SSDs like your Evos can't cache sync writes because they don't have power-loss protection (PLP). If you don't want terrible sync IOPS, you need to replace the consumer SSDs with proper enterprise/datacenter SSDs that have PLP.

The only other option would be to use those SSDs behind a proper HW RAID card with cache + BBU, so the RAID controller can cache the sync writes in its battery-backed cache.

Even a 10€ second-hand SATA enterprise SSD can easily achieve 40× the fsync performance of your Evos...
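The gap is easy to see for yourself with dd: oflag=dsync forces every single write to be acknowledged by stable storage, which is roughly the pattern pveperf's FSYNCS/SECOND exercises. A minimal sketch (file path is arbitrary):

```shell
#!/bin/sh
# Compare buffered writes against per-write-synced writes. On a
# consumer SSD without PLP the dsync run is dramatically slower;
# with PLP (or a BBU-backed RAID cache) the gap shrinks.
F=/var/tmp/fsync-test.img
echo "buffered (one fsync at the end):"
dd if=/dev/zero of="$F" bs=4k count=256 conv=fsync 2>&1 | tail -1
echo "dsync (every 4k block synced):"
dd if=/dev/zero of="$F" bs=4k count=256 oflag=dsync 2>&1 | tail -1
rm -f "$F"
```

dd's throughput summary makes the difference obvious; for more rigorous numbers, fio with --fsync=1 is the usual tool.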
But I'm talking about the 990-PRO version here. Do you have a recommendation for
appropriate, better HW? Do you actually notice it in daily use? Or will you be slowed down again somewhere else?
The 990 PROs are the same grade as the normal EVOs. They use the same (not great) TLC NAND and are still consumer SSDs, not meant for server workloads like databases that make heavy use of sync writes.
Samsung PM983 would be an entry-level enterprise grade M.2 disk if you want sync write performance and stick with Samsung.
The 990 PRO is a cut-down "pro-sumer" SSD. Previous "PROs" at least used better NAND (MLC instead of TLC) while still lacking enterprise features like PLP, but that's no longer the case. It's just wasted money to buy a Samsung PRO these days, as you pay a premium price without getting anything premium. Either buy a cheap consumer "EVO" or a proper but more expensive enterprise SSD like the PM893 or PM983.
And I would avoid buying a "QVO" with its QLC NAND, no matter what the use case might be. The terrible write performance and reduced durability aren't worth the few bucks you save by buying a QVO instead of an EVO.
Sorry, transposed numbers...
"PM983" and "PM9A3" are the M.2 enterprise SSDs:

But keep in mind that they need an M.2 22110 slot, like most enterprise M.2 SSDs. Only Micron and Kingston offer M.2 2280 enterprise SSDs, and only up to 1 TB. Enterprise SSDs usually use U.2 or U.3, since M.2 is a pretty bad standard for servers: it was designed for a small footprint in laptops, not for enough surface area for cooling, or room for a lot of NAND chips (high capacity) and the power caps needed for PLP.
I'd rather ignore pveperf values than uselessly burn money.
The PVE hosts work very well on standard hardware for up to 5 VMs (at around €800 per host). If I want to scale further, I'd rather add another host than pay for a "real", fat server. As soon as data has to go over the network, fsyncs/second are irrelevant anyway.

