Subject: RAM Upgrade Wreaking Havoc on Proxmox IO Performance

Fen

New Member
Apr 17, 2025
Having a heck of a time with a RAM upgrade messing up my Proxmox machine. Here are the hard facts:



Mobo: Supermicro X11DPL-i

RAM we are installing: M386AAK40B40-CWD6Q - 128GB x 8 = 1024 GB

RAM we are removing: M393A4K40BB2-CTD7Q - 32GB x 8 = 256 GB

Proxmox Version: 8.3.5



Symptoms:

On our old RAM (256 GB), we see IO delay on the server at around 0.43%. With the new RAM installed (1 TB), we see IO delay at 10-15%, and it spikes to 40-50% regularly.
[Screenshot: Proxmox IO delay graph]

*Sorry, the % axis got cut off in that screenshot; it peaks at 50%.
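For reference (not from the screenshots), the same IO delay can be watched from the shell while it spikes; a minimal sketch, assuming the sysstat package is installed for iostat:

iostat -x 5    # per-disk utilization, queue sizes, and await times, refreshed every 5 s
vmstat 5       # the "wa" column is the CPU iowait that the Proxmox IO-delay graph reflects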



The hard drives are set up like this (zpool status output):



NAME                                   STATE   READ WRITE CKSUM
HDD-ZFS_Pool                           ONLINE     0     0     0
  mirror-0                             ONLINE     0     0     0
    ata-ST18000NM000J-2TV103_ZR50CD3M  ONLINE     0     0     0
    ata-ST18000NM000J-2TV103_ZR50CBK5  ONLINE     0     0     0

errors: No known data errors
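A quick sketch for watching the pool itself during the spikes (standard ZFS tooling; only the pool name above is assumed):

zpool iostat -v HDD-ZFS_Pool 5   # per-vdev read/write ops and bandwidth, refreshed every 5 s
zpool status -v HDD-ZFS_Pool     # the health output shown above, with verbose error detail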



We have already set zfs_arc_max to 16 GB following these guidelines.

[Screenshot: arc_max configuration]
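For anyone following along, a minimal sketch of the kind of ARC cap described above, assuming the usual OpenZFS module parameter on Proxmox (16 GiB = 17179869184 bytes):

echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max              # applies immediately at runtime
echo "options zfs zfs_arc_max=17179869184" > /etc/modprobe.d/zfs.conf  # persists across reboots
update-initramfs -u -k all                                             # rebuild the initramfs so the option is picked up early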


After making this change the VMs became usable, and the IO delay dropped a bit, from a constant 40-50% to 10-15%, only spiking to 40-50%. But the main symptom now is that all our VMs are getting no download speed.


[Screenshot: VM download speed test]
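To separate a network problem from a storage-write problem inside a VM, a rough sketch (the second LAN host running iperf3 and the test URL are placeholders, not from the original post):

iperf3 -s                          # on another host on the LAN
iperf3 -c <lan-host>               # in the VM: raw network throughput, no disk involved
curl -o /dev/null <test-url>       # download that never touches the virtual disk
curl -o /tmp/testfile <test-url>   # same download written to the VM disk / ZFS pool

If only the to-disk cases are slow, the bottleneck is storage rather than the network.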




We are on our second set of new RAM sticks for the 1TB, and we saw the same issue on both sets, so I think the RAM is good.



I need Next Steps, I need actionable ideas, I need your help! Thank you in advance for your wisdom! I'll be back checking this and available to provide details.
 
Check with fio, e.g. (the /root run exercises the on-disk storage, the /dev/shm run gives a RAM-backed baseline to compare against):

mkdir /root/fiotests /dev/shm/fiotests

cd /root/fiotests
fio --name=randread.4k.50g --ioengine=posixaio --rw=randread --bs=4k --size=50g --numjobs=4 --iodepth=1 --runtime=60 --time_based --end_fsync=1
sleep 10
fio --name=randwrite.4k.50g --ioengine=posixaio --rw=randwrite --bs=4k --size=50g --numjobs=4 --iodepth=1 --runtime=60 --time_based --end_fsync=1
sleep 10

cd /dev/shm/fiotests
fio --name=randread.4k.50g --ioengine=posixaio --rw=randread --bs=4k --size=50g --numjobs=4 --iodepth=1 --runtime=60 --time_based --end_fsync=1
sleep 10
fio --name=randwrite.4k.50g --ioengine=posixaio --rw=randwrite --bs=4k --size=50g --numjobs=4 --iodepth=1 --runtime=60 --time_based --end_fsync=1

rm -rf /root/fiotests /dev/shm/fiotests
But anyway, with 1 TB of RAM I would limit the ZFS ARC size to 128 GB; see the arc_summary command.
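A minimal sketch of checking and adjusting the ARC cap along those lines (assuming 128 GiB = 137438953472 bytes):

arc_summary                                                  # current ARC size, target, and min/max
cat /sys/module/zfs/parameters/zfs_arc_max                   # current cap in bytes (0 means the default)
echo 137438953472 > /sys/module/zfs/parameters/zfs_arc_max   # raise the cap at runtime
# update /etc/modprobe.d/zfs.conf as above to make it stick after a reboot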
 