[SOLVED] One Debian Guest VM extremely slow on Proxmox VE 9 - all others seem fine on the same Host

For reference, the same base load on VM 156 (APTCacherNG), which is fast:
 

Attachment: 20250923_proxmox_156_atop_base_load.png
@fabian: I spotted the issue, almost by coincidence, while cleaning up the /etc/fstab file.

There was a sync mount option set for the / filesystem in /etc/fstab :rolleyes:. Not sure why I set that some time ago ...

So essentially every single write was forced straight to disk, bypassing buffers/cache. That made IOPS drop from roughly 10k to around 100.
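
If someone wants to verify this on their own system, the active mount options can be checked inside the guest like this:

Code:
# Show the active mount options for / - if "sync" appears here,
# every write is forced through synchronously
findmnt -no OPTIONS /

# Alternative: the kernel's view of the root mount
grep ' / ' /proc/mounts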

Now that I removed the sync option and, while at it, also added the noatime mount option for the / filesystem in /etc/fstab, it's back to around 10k IOPS :).
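
For illustration, the relevant /etc/fstab line now looks roughly like this (the UUID and the ext4 filesystem type are placeholders, adapt them to your own setup):

Code:
# Before (the sync option was the culprit):
# UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults,sync,errors=remount-ro 0 1

# After (sync removed, noatime added so plain reads no longer
# trigger access-time metadata writes):
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx / ext4 defaults,noatime,errors=remount-ro 0 1

# Re-read the options for / from fstab; should work without a reboot,
# but a reboot is the safe route
mount -o remount /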

Here is just the first iteration of the benchmark script; no need for more :):

Code:
Sleeping for 120 Seconds before starting Benchmark
FIO RANDOM IO Benchmark for Block Size = 512 and Queue Depth = 1
RAW Size = 1073741824 / RAW Block Size = 512 requires 2097152 Number of small Files
Writing one BIG File for Block Size = 512 and Queue Depth = 1:
==============================================================================================
write_iops: (g=0): rw=randwrite, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=1
fio-3.39
Starting 1 process
write_iops: Laying out IO file (1 file / 1024MiB)
Jobs: 1 (f=1): [w(1)][99.6%][w=5208KiB/s][w=10.4k IOPS][eta 00m:01s]
write_iops: (groupid=0, jobs=1): err= 0: pid=1900: Tue Sep 23 21:02:10 2025
  write: IOPS=8457, BW=4229KiB/s (4330kB/s)(1018MiB/246458msec); 0 zone resets
    slat (usec): min=7, max=294567, avg=31.46, stdev=247.67
    clat (nsec): min=1438, max=39950k, avg=84832.18, stdev=122029.35
     lat (usec): min=46, max=294576, avg=116.29, stdev=271.49
    clat percentiles (usec):
     |  1.00th=[    3],  5.00th=[    3], 10.00th=[    3], 20.00th=[   71],
     | 30.00th=[   76], 40.00th=[   81], 50.00th=[   86], 60.00th=[   90],
     | 70.00th=[   96], 80.00th=[  104], 90.00th=[  117], 95.00th=[  135],
     | 99.00th=[  225], 99.50th=[  285], 99.90th=[  529], 99.95th=[  766],
     | 99.99th=[ 4146]
   bw (  KiB/s): min= 1291, max= 5427, per=100.00%, avg=4231.13, stdev=756.63, samples=492
   iops        : min= 2582, max=10855, avg=8462.40, stdev=1513.28, samples=492
  lat (usec)   : 2=0.74%, 4=11.09%, 10=0.09%, 20=0.05%, 50=0.06%
  lat (usec)   : 100=63.31%, 250=23.91%, 500=0.63%, 750=0.06%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=4.51%, sys=15.53%, ctx=2084715, majf=0, minf=37
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2084387,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=4229KiB/s (4330kB/s), 4229KiB/s-4229KiB/s (4330kB/s-4330kB/s), io=1018MiB (1067MB), run=246458-246458msec

Disk stats (read/write):
  sda: ios=1/2098210, sectors=80/4127840, merge=0/477220, ticks=0/224602, in_queue=239198, util=76.96%
==============================================================================================
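
For anyone who wants to reproduce this kind of run without the full benchmark script, a fio invocation roughly like the one below matches the job parameters shown in the output above (the file path is a placeholder, and whether the original script uses direct I/O is not visible in the output):

Code:
# 512-byte random writes at queue depth 1 against a 1 GiB test file,
# matching the write_iops job in the output above
fio --name=write_iops --rw=randwrite --bs=512 --iodepth=1 \
    --ioengine=libaio --size=1G --filename=/tmp/fio_testfile

# Remove the test file afterwards
rm /tmp/fio_testfile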

EDIT 1: added [SOLVED] to the thread title (prefix) since the issue is now solved :).