[SOLVED] High IO Delay

Whattteva

New Member
Feb 16, 2023
Hello, I have a high IO delay issue, peaking at 25%, that has been bothering me for quite some time. I've seen a few threads about it, but none of them really solved my issue.

I've noticed that I experience it under these conditions:
  • Copying/moving files within a Windows 11 VM.
  • Installing MacOS in a VM.
  • Cloning a VM through Proxmox web admin.
In all of these cases, the IO delay coincides with high disk IO, while CPU usage does not seem to be a factor. Any help pointing me in the right direction toward a resolution is appreciated.

My specs are as follows:
  • Supermicro X11SPI-TF
  • Intel Xeon Silver 4210T (10c/20t) Cascade Lake 2.3/3.2 GHz 95 W
  • 224 GB DDR4 2400 ECC LRDIMM
  • 2x Inland Professional 512 GB SSD - Mirrored
  • Proxmox VE 7.3-3
If more details are needed, let me know.
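
For anyone trying to reproduce this: one way to confirm that the spikes line up with disk saturation rather than CPU load is to watch iowait and per-device utilization while the copy or clone runs. This sketch assumes the sysstat package is installed for iostat, and that the pool uses the default Proxmox name rpool:

  # CPU iowait plus per-device throughput and utilization, refreshed every 2 seconds
  iostat -xm 2
  # the same picture from the ZFS side, broken down per mirror member
  zpool iostat -v rpool 2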
 
Can you provide more information about your disk setup? Are you using the SSD mirror both for PVE and as storage for your VMs? A simple mirror of SSDs does not provide much performance when multiple VMs share the same storage and heavy I/O operations are running.
 
They're set up just using whatever the default Proxmox installation does, nothing custom. Also yes, I am using them for both Proxmox and the VM storage.

I do run about 6 VMs, but all of them are basically idle. Whenever I get these IO delay events, I can trace them exactly to either something disk-heavy I'm doing in one VM (e.g. a file copy) or a VM clone running in the Proxmox UI.

I understand that a single mirror isn't all that fast, but I find it odd that a simple file copy in one VM could slow everything down to a crawl (including the other VMs).
 
Just a first guess:
Is any component overheating and throttling at higher temperatures? An action like copying might push it just over the limit.
Is anything ASPM-related active in the BIOS? If yes, try disabling it.
Are your VMs configured with a VirtIO disk controller or VirtIO SCSI single? (If SATA, that's not really fast.)
Try writeback cache in your VMs instead of leaving it off. (A qm sketch for checking/changing these settings follows below.)
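
A rough sketch of how those settings can be checked and changed from the host with qm; the VM ID 100 and the volume name local-zfs:vm-100-disk-0 are assumptions here, use whatever qm config actually reports:

  # show the current controller type and disk options of the VM
  qm config 100
  # switch the disk controller to VirtIO SCSI single
  qm set 100 --scsihw virtio-scsi-single
  # re-attach the existing disk with writeback cache enabled for the test
  qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback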
 
Just a first guess:
Is any component overheating and throttling at higher temperatures? An action like copying might push it just over the limit.
Unlikely, IPMI isn't giving me any alerts and CPU usage is low besides disk activity.

Is anything ASPM-related active in the BIOS? If yes, try disabling it.
Not sure about this one. I left most things on default. I'd like to check it but it would require me to take the server down, which isn't really feasible right now, but I will look into this.

Are your VMs configured with a VirtIO disk controller or VirtIO SCSI single? (If SATA, that's not really fast.)
They're all set as VirtIO SCSI Single and they have the VirtIO guest drivers installed.

Try writeback cache in your VMs instead of leaving it off.
No writeback cache. Could this be the issue? Is it safe to use the cache?
 
How are the drives mirrored?
ZFS? HW RAID?
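
If it is ZFS, the pool layout is quick to check on the host; the pool name rpool below is an assumption based on the Proxmox installer's default:

  # vdev layout, mirror membership and error counters
  zpool status rpool
  # capacity, fragmentation and health of the pool
  zpool list -v rpool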

Your SSDs are cheap, so you can't really avoid high IO delay.
IMO, the best performance would come from one SSD with ext4 (PVE will use LVM-thin to store vDisks), and the other SSD as an ext4 disk providing a datastore for a PBS installation alongside PVE, for daily backups.
The write cache option in each VM needs to be enabled to get the best performance, but data loss is possible on power failure.

... sorry for my english...
 
Unlikely, IPMI isn't giving me any alerts and CPU usage is low besides disk activity.
It's not only the CPU that can overheat: the NIC, the disk controller, anything with a heat sink can too, and not everything has a sensor that can be viewed remotely or detected at all.
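
A couple of quick thermal spot checks from the shell for parts IPMI may not cover; this assumes lm-sensors and smartmontools are installed, and /dev/sda is just a placeholder:

  # every temperature sensor the kernel exposes (CPU, chipset, NIC if supported)
  sensors
  # drive temperature as reported by SMART
  smartctl -a /dev/sda | grep -i temp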


No writeback cache. Could this be the issue? Is it safe to use the cache?
Could be, or it could be anything about these SSDs. I don't know them and can't say how they behave/perform under the best circumstances.
It should be safe to try writeback as a test to rule that out, but not for long-term/production use in this configuration.
Maybe try it with the Win11 VM only and see if it performs better.

Your SSDs are cheap, so you can't really avoid high IO delay.
Yup, just found them on Amazon.
They don't perform well enough, and the IO delay starts exactly when the internal cache of these SSDs is exhausted. (This has nothing to do with the cache setting of the VM; that's a different cache and, as gabriel said, it is unsafe: you will lose data on a power failure, if not corrupt the whole disks in the end.)
 
How are the drives mirrored?
ZFS? HW RAID?
ZFS.

Your SSDs are cheap, so you can't really avoid high IO delay.
Yeah, I'm aware that they're cheap SSDs. But I just didn't think a simple file copy within one VM would cause such significant IO delay.
I'm definitely open to replacing them, but I just want to make sure that is indeed the root cause before going out and buying new SSDs only to discover that it doesn't solve the issue.

The write cache option in each VM needs to be enabled to get the best performance, but data loss is possible on power failure.
Yeah, this is why I'm not too keen on enabling it.
 
It's not only the CPU that can overheat: the NIC, the disk controller, anything with a heat sink can too, and not everything has a sensor that can be viewed remotely or detected at all.
I understand that, but the chassis is massive (Fractal Design Define 7 XL) with plenty of space and no expansion cards other than a SAS2 HBA. It also has 6x 140mm fans so it's well-ventilated.

Could be, or it could be anything about these SSDs. I don't know them and can't say how they behave/perform under the best circumstances.
It should be safe to try writeback as a test to rule that out, but not for long-term/production use in this configuration.
Maybe try it with the Win11 VM only and see if it performs better.
Thanks, I will be trying this out.

Yup, just found them on Amazon.
They don't perform well enough, and the IO delay starts exactly when the internal cache of these SSDs is exhausted. (This has nothing to do with the cache setting of the VM; that's a different cache and, as gabriel said, it is unsafe: you will lose data on a power failure, if not corrupt the whole disks in the end.)
Hmm, understood. If the writeback cache experiment proves unfruitful, I will look into getting another pair of SSDs.
 
but I just want to make sure that is indeed the root cause before going out and buying new SSDs only to discover that it doesn't solve the issue.
So for just this short test it is OK to try writeback with one VM plus a reboot. If it's even slightly better, then you know for sure.

Anyway, I think the device has its own write cache enabled and hardcoded (maybe changeable, but it reverts on reboot). Try on the host: smartctl -x /dev/sdX | grep Writeback

Disabled = on the safe side
Enabled = data loss/corruption for sure on power loss
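
One caveat on that grep: on SATA drives the line printed by smartctl -x usually reads "Write cache is: Enabled/Disabled" rather than "Writeback", so a broader pattern is safer. hdparm can read the drive's volatile cache state and, for a test, toggle it; /dev/sda is a placeholder and the setting may indeed revert on reboot:

  # check the drive's own volatile write cache state
  smartctl -x /dev/sda | grep -i 'cache is'
  hdparm -W /dev/sda
  # temporarily disable it for a test (use -W1 to re-enable)
  hdparm -W0 /dev/sda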
 
ZFS is the worst case here; please read the forum topics, you'll get flamed for using ZFS on consumer SSDs...
ZFS is meant for enterprise storage disks, and without a HW RAID controller.
 
ZFS is the worst case here; please read the forum topics, you'll get flamed for using ZFS on consumer SSDs...
ZFS is meant for enterprise storage disks, and without a HW RAID controller.
Yup. And that mainboard + CPU + RAM costs over 2000€, but you were only willing to spend 74€ on those SSDs. That's saving bucks at the wrong end.

Care to link me the thread for reference?
Also see the ZFS benchmark paper's FAQ:
https://www.proxmox.com/de/downloads/item/proxmox-ve-zfs-benchmark-2020
Can I use consumer or pro-sumer SSDs, as these are much cheaper than enterprise-class SSDs?
No. Never. These SSDs won't provide the required performance, reliability or endurance. See the fio results from before and/or run your own fio tests.
[Attached: fio benchmark table from the Proxmox ZFS benchmark paper comparing enterprise and consumer SSDs]
So your SSDs are probably somewhere in the range of that Crucial MX100.
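
If you want to run your own numbers, a minimal fio test in the spirit of that comparison looks roughly like this. Running fio against a raw device would wipe it, so this sketch writes to a scratch file instead; the path under /rpool and the size are placeholder assumptions:

  # sustained 4k synchronous writes - the pattern where consumer SSDs without power-loss protection collapse
  fio --name=synctest --ioengine=psync --rw=write --bs=4k --numjobs=1 --iodepth=1 \
      --sync=1 --size=4G --runtime=300 --time_based --filename=/rpool/fio-testfile
  # clean up the scratch file afterwards
  rm /rpool/fio-testfile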



And for ZFS on top of HW raid controllers see the OpenZFS documentation:
https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers
 
Yup. And that mainboard + CPU + RAM costs over 2000€, but you were only willing to spend 74€ on those SSDs. That's saving bucks at the wrong end.
Haha, I wasn't trying to save money or anything. I'm totally fine with buying new enterprise SSDs if need be. Those SSDs just happened to be something I already had on hand, so I didn't have to wait for shipping to use them. And really, for the most part, they've been totally fine other than the occasional spike of IO delay (I've been using them for about 3 months).

Also see the ZFS benchmark paper's FAQ:
https://www.proxmox.com/de/downloads/item/proxmox-ve-zfs-benchmark-2020

So your SSDs are probably somewhere in the range of that Crucial MX100.
Ah, thank you for this. I will be ordering another pair of SSDs. Any recommendations for decently-priced ones?
 
Any recommendations for decently-priced ones?
https://yourcmc.ru/wiki/Ceph_performance recommends:
Micron 5100/5200, 9300. Maybe 5300, 7300 too
Seagate Nytro 1351/1551
HGST SN260
Intel P4500

There is also an Excel sheet there with benchmarks of more disks.
Ceph is more demanding on SSDs, and what's good for Ceph is also good for ZFS.
Besides that, the link is a good read explaining the different quality levels of SSDs (just ignore the Ceph theming :) ).
 
I've ordered a pair of Intel DC S3500 480 GB. Those should be OK, I presume?
Not the greatest write performance and durability, as they are designed for read-intensive workloads, but yes, they should perform way better than the consumer disks you've got right now.
 
