VMs running very slowly

thegreekman

New Member
Jul 3, 2023
I have a Proxmox VE 8.0.3 server running on bare metal. These are the specs:
CPU - AMD Ryzen 9 3950X - 16 cores / 32 threads
RAM - 32 GB
Storage - 1 TB SSD, 4 TB HDD - both configured as LVM-thin
I have 3 VMs running, each with 4 vCPUs and 2 GB of RAM, all using the HDD as storage. One server is Ubuntu, the other two are Debian.

As soon as I boot up another VM - it doesn't matter the specs - the I/O speed on all VMs drops from ~100 MB/s to ~30 MB/s. If I boot up yet another VM on top of that, it drops to ~5 MB/s.
I have no idea why this is happening or how to fix it. Any help is much appreciated.
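To quantify what each guest is actually getting, a synchronous dd write gives a rough number from inside a VM (a sketch only - fio gives far better figures if installed; the file path is just an example):

```shell
#!/bin/sh
# Rough sequential-write check from inside a guest.
# conv=fdatasync flushes data to disk before dd exits, so the
# page cache doesn't inflate the result.
# /tmp/ddtest.bin is just an example path on the VM's disk.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddtest.bin
```

Running this in several VMs at the same time shows how the aggregate throughput collapses as guests are added.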
 
Slow can mean many things, but I assume you are talking about slow disk I/O. HDDs are already slow with one operating system, more so when running multiple operating systems at once, and much worse still when they use SMR. Enterprise SSDs in various RAID configurations can handle virtualization workloads easily, but a single QLC SSD is worse than an HDD. You did not give details about your drives, but generalizing from other threads on this forum, you are most likely using single cheap consumer drives.
 

Thank you for the response. By slow I meant disk I/O. I can't even SSH to the VMs after booting up a 4th VM; they just run like molasses.
I am using this HDD. Upon more research, it looks like SMR drives are a bad choice.
Model Family: Seagate BarraCuda 3.5 (SMR)
Device Model: ST4000DM004-2CV104

My SSD is a Samsung SSD 970 EVO Plus 1TB.

Do you have any recommendations for HDDs I can get that will run VMs without a huge performance hit?
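For anyone checking their own drives: the model lines above come from smartctl (part of smartmontools); on many Seagate drives the Model Family line says (SMR) outright. /dev/sda below is an example device node:

```shell
#!/bin/sh
# Print drive identity lines that reveal the model family.
# /dev/sda is an example device node; adjust to your disk.
if command -v smartctl >/dev/null 2>&1; then
    smartctl -i /dev/sda 2>/dev/null | grep -E 'Model Family|Device Model' \
        || echo "could not query /dev/sda"
else
    echo "smartctl not installed (apt install smartmontools)"
fi
```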
 
Thank you for the response. By slow I meant disk I/O. I can't even SSH to the VMs after booting up a 4th VM; they just run like molasses.
I am using this HDD. Upon more research, it looks like SMR drives are a bad choice.
Model Family: Seagate BarraCuda 3.5 (SMR)
Device Model: ST4000DM004-2CV104
Proxmox itself will work fine running from that HDD, and you can store ISOs on it, but don't use it for VMs.
My SSD is a Samsung SSD 970 EVO Plus 1TB.
That might just work for VMs, for a while. It's best used for (non-Windows) VMs that don't write much.
Do you have any recommendations for HDDs I can get that will run VMs without a huge performance hit?
A (second-hand) enterprise SSD with power loss protection (PLP), as you will find recommended all over this forum.
 
Do you have IO thread enabled on the VMs? You can check this with cat /etc/pve/qemu-server/VMID.conf (the file is named after the VM ID) and seeing whether the disk line has iothread=1, e.g. scsi0: local:101/vm-101-disk-0.qcow2,iothread=1,size=32G.

I wouldn't expect decent I/O performance on an HDD (especially if it only does 5400 RPM), but I would expect a bit more than 5 MB/s if the three VMs are not doing constant I/O.
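To check all VMs at once, something along these lines works (a sketch: on a real node you would point CONF_DIR at /etc/pve/qemu-server; a sample config is created here so the snippet is self-contained):

```shell
#!/bin/sh
# Report which VM disk lines have iothread enabled.
# CONF_DIR stands in for /etc/pve/qemu-server on a real PVE node.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/101.conf" <<'EOF'
scsi0: local:101/vm-101-disk-0.qcow2,iothread=1,size=32G
scsi1: local:101/vm-101-disk-1.qcow2,size=8G
EOF
for f in "$CONF_DIR"/*.conf; do
    vmid=$(basename "$f" .conf)
    # iothread only applies to scsi/virtio disks
    grep -E '^(scsi|virtio)[0-9]+:' "$f" | while read -r line; do
        case "$line" in
            *iothread=1*) echo "VM $vmid: $line -> iothread ON" ;;
            *)            echo "VM $vmid: $line -> iothread OFF" ;;
        esac
    done
done
rm -rf "$CONF_DIR"
```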
 
Proxmox itself will work fine running from that HDD, and you can store ISOs on it, but don't use it for VMs.
Not even sure about that...
I also bought an ST4000DM004 and it is so slow: when the CMR cache gets full, the average response time drops from a couple of milliseconds to a couple of MINUTES! Worst disk I ever had. Every cheap USB 2.0 pen drive performs faster in comparison.
If I remember right, it took me two weeks of copying 24/7 to fill that 4 TB with data. Now I only use it as read-only storage; writing anything to it should be avoided, since the whole machine slows down when a response takes minutes. I couldn't even use it in my bare-metal Win10 machine. Every time I extracted a bigger zip file or installed a game, the PC became completely unusable for at least 2 hours until the disk had freed up its cache again. It was also a real pain when opening File Explorer: it polls all the disks, and Explorer got stuck because this disk needs 8 minutes to answer.
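That cache-full stall shows up clearly in the kernel's per-disk counters. A crude snapshot from /proc/diskstats (field 13 is milliseconds spent doing I/O; sampling it twice and dividing by the interval gives a %busy figure - iostat -x from the sysstat package does this properly):

```shell
#!/bin/sh
# Snapshot per-disk busy time from /proc/diskstats.
# $3 is the device name, $13 is io_ticks (ms spent doing I/O).
# loop and ram devices are skipped as uninteresting.
awk '$3 !~ /^(loop|ram)/ { print $3, "io_ms=" $13 }' /proc/diskstats
```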
 
Yes, IO thread was enabled. I switched everything over to an SSD and all is well now.
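For anyone landing here later: the switch can be done live with qm disk move (or Hardware -> Move disk in the GUI). The VM ID 101, disk scsi0, and storage name ssd-thin below are example values:

```shell
#!/bin/sh
# Move a VM disk to another storage while the VM keeps running.
# 101, scsi0 and ssd-thin are example values; --delete 1 removes
# the old copy on the HDD once the move succeeds.
if command -v qm >/dev/null 2>&1; then
    qm disk move 101 scsi0 ssd-thin --delete 1
else
    echo "run this on a Proxmox VE node (qm not found)"
fi
```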
 
