Strange patterns in resource graphs.

M_D

New Member
Dec 21, 2025
I have just noticed some strange, repeating patterns in the resource graphs on my host and my VMs/CTs. To start with, the following are from the host machine:

CPU:
Server CPU.png
Net:
Server Net.png
CPU Pressure Stall:
Server CPU Pres Stall.png

You can start to see a bit of a pattern on the CPU. Nothing too odd so far...

Now this is from a Linux CT. I have two CTs that look exactly like the following, and two that do not.

The CPU graph is quite constant, low, and flat.

Net:
Linux CT 1.png

And this is from Linux VM 1:
Linux VM 1.png

And this is from Linux VM 2, completely separate from the above machine:
Linux VM 2.png

There are two things really confusing me here.

A) What is causing the odd resource spikes on the VMs/CTs (any one of them, in isolation)? These particular VMs and CTs are quite idle at the moment. I would dig in with top etc., but I suspect something broader than anything going on within any one VM/CT... (a per-process logging sketch follows after question B).

B) Why do multiple, seemingly unrelated VMs AND CTs show (almost) identical graphs? This is really puzzling me, to the point that I am suspecting a PVE graphing issue, as I am pretty sure each VM is not actually seeing spikes of 14M of network traffic (based on stats from elsewhere on the network)... (a second sketch below pulls the raw RRD data to check this).
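For question A, when I do get a chance to dig in, this is roughly what I plan to run inside one of the affected guests: a minimal per-process CPU logger. Sketch only - it assumes psutil is installed in the guest, and the 5-second interval and top-5 cutoff are arbitrary picks of mine, nothing Proxmox-specific:

Code:
#!/usr/bin/env python3
"""Log the top CPU consumers inside a guest around a spike window."""
import time
from datetime import datetime

import psutil

INTERVAL = 5  # seconds between samples (assumes spikes last longer than this)
TOP_N = 5     # processes to report per sample

# Prime the per-process counters; the first cpu_percent() call always returns 0.0.
for proc in psutil.process_iter():
    try:
        proc.cpu_percent(None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

while True:
    time.sleep(INTERVAL)
    samples = []
    for proc in psutil.process_iter(['pid', 'name']):
        try:
            samples.append((proc.cpu_percent(None), proc.info['pid'], proc.info['name']))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    samples.sort(reverse=True)
    stamp = datetime.now().strftime('%H:%M:%S')
    print(stamp, ', '.join(f"{name}({pid}) {cpu:.1f}%" for cpu, pid, name in samples[:TOP_N]))

If a spike shows up on the graph but nothing shows up in this log, that would already hint the spike is not coming from inside the guest.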
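And for question B, to separate "the graph is drawn wrong" from "the underlying data is wrong", I am thinking of pulling the raw samples the GUI graphs are drawn from, via the rrddata API endpoint, and comparing guests side by side. Another sketch, assuming proxmoxer is installed on a management box; 'pve1', the token, and the guest IDs are placeholders for my own:

Code:
#!/usr/bin/env python3
"""Pull raw RRD samples for several guests and compare netin side by side."""
from proxmoxer import ProxmoxAPI

NODE = 'pve1'                                          # placeholder node name
GUESTS = [('qemu', 101), ('qemu', 102), ('lxc', 201)]  # placeholder guest IDs

proxmox = ProxmoxAPI(NODE, user='root@pam',
                     token_name='monitor', token_value='xxxx',  # placeholder token
                     verify_ssl=False)

series = {}
for kind, vmid in GUESTS:
    guest = getattr(proxmox.nodes(NODE), kind)(vmid)
    # /nodes/{node}/{qemu|lxc}/{vmid}/rrddata is the data behind the GUI graphs.
    data = guest.rrddata.get(timeframe='hour')
    series[vmid] = {d['time']: d.get('netin') or 0 for d in data}

# Print netin per timestamp for every guest; genuinely identical columns
# would mean the stats themselves match, not just the rendered plot.
common = sorted(set.intersection(*(set(s) for s in series.values())))
print('time        ' + ''.join(f'{vmid:>12}' for _, vmid in GUESTS))
for t in common:
    print(f'{t}  ' + ''.join(f'{series[vmid][t]:>12.0f}' for _, vmid in GUESTS))

If the raw netin values really do match across unrelated guests, that would point at the collection side (pvestatd writing the stats) rather than at the graphing.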

And for reference, here are a couple of Windows VMs which are being quite well-behaved:

1771321207864.png 1771321227759.png

I am not experiencing any specific issues (performance, etc.), but I would like an explanation for this odd resource-stats reporting behaviour!
 
Just something to think about: I observe similar patterns with my Docker LXC.

However, in my case it's completely explainable. Motioneye runs there alongside many other services. The darker the night, the less network data comes from the cameras, so Motion has less to process, and CPU usage drops noticeably every night.

cpu.png
nw.png
 
Interesting.

I haven't managed to think up a reason for the patterns on mine, but what is actually more odd than the repeating spikes is how the stats for several different, unrelated VMs look basically identical - the same spikes at the same times.

I haven't yet had time to purposely load up one of the VMs with a process inside it and watch the graphs (a quick load-generator sketch for that test is below), but as it stands it almost looks like either:

A) Proxmox is graphing wrong somehow.
or
B) Something on the host is causing the measured resource use of multiple VMs to increase at the same time...
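
The load test itself is trivial - plain stdlib Python run inside one VM, nothing Proxmox-specific; the worker count and duration are arbitrary choices:

Code:
#!/usr/bin/env python3
"""Burn a fixed number of cores for a fixed time inside one guest."""
import multiprocessing
import time

WORKERS = 2     # cores to load (assumes the VM has at least this many)
DURATION = 600  # seconds; long enough to register on the hourly graph

def burn(deadline):
    # Throwaway arithmetic in a tight loop until the deadline passes.
    x = 0
    while time.time() < deadline:
        x = (x * 31 + 7) % 1000003

if __name__ == '__main__':
    deadline = time.time() + DURATION
    procs = [multiprocessing.Process(target=burn, args=(deadline,))
             for _ in range(WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

If only the loaded VM's graph moves, the collection is per-guest as expected and B looks less likely; if the same bump appears on the other guests' graphs too, that would point squarely at the host-side stats.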

I'd be glad of any other input.