Poor disk performance

gametastic2014

New Member
Sep 5, 2021
Hi there,
I am hoping someone can help me here. I have tried all sorts of storage options for Proxmox now, all set up as shared storage, but I can't get my head around why they are all so terrible speed-wise. I have just set up an SMB share on a Windows box; running CrystalDiskMark directly on it gives the speeds below:
[Screenshot: CrystalDiskMark results on the Windows host]

However, if I add this SMB share to Proxmox and then give another Windows VM a virtual hard disk on this storage, it gets this speed:
[Screenshot: CrystalDiskMark results for the VM's virtual disk]

I have now tried several things, including NFS, CIFS/SMB and iSCSI; the iSCSI and CIFS/SMB shares were served from FreeNAS, TrueNAS and OpenMediaVault.

Can anyone tell me why the disk performance gets so terrible after adding the storage to the Proxmox cluster? (It was added via the Datacenter > Storage menu.)
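For reference, adding it that way ends up as a CIFS entry in /etc/pve/storage.cfg along these lines (the storage name, server address, share and username here are illustrative placeholders, not my exact values):

cifs: win-smb
        path /mnt/pve/win-smb
        server 192.168.1.10
        share proxmoxstore
        username benchuser
        content images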


FYI, all of the storage types above yield similarly poor performance, close to the speeds in the second screenshot.

The current Proxmox version is 7.0-8.
There is also a 4 x 1G LAG set up on the switch for both the storage box and the Proxmox host; the second screenshot above is over a 1G link.
This one is over the 4G LAG (and not quite 4x the 1G either, which is strange):
[Screenshot: CrystalDiskMark results over the 4G LAG]
 
What does your VM's config file look like? Do you use the VirtIO SCSI controller + a SCSI disk + the VirtIO drivers for that VM?
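For reference, a VM set up that way has entries in /etc/pve/qemu-server/<vmid>.conf roughly like the following (VM ID, storage name, disk size and MAC are illustrative only):

agent: 1
scsihw: virtio-scsi-pci
scsi0: win-smb:101/vm-101-disk-0.raw,cache=none,size=64G
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
boot: order=scsi0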
 
It's not clear where your first benchmark was run: on the host providing the SMB share directly, or via CIFS from another host.
I would start by reducing the number of variables, for example:
- Mount the CIFS share directly on Proxmox and run the benchmark from the Proxmox host, without adding a VM into the mix (a sketch of such a run follows this list).
- Check the network interfaces for errors.
- Do a network capture to check for retransmits.
- Reduce the LAG to a single port (you are not getting 4x the performance from a LAG; the hashing is done on an IP basis and there are only two IPs involved in a CIFS session). You are better off using multiple individual ports with multipath/multisession.
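
For the first two points, something like this run directly on the Proxmox host would do (the storage/mount name and NIC name are just examples):

# sequential write test against the mounted CIFS storage
fio --name=seq-write --filename=/mnt/pve/win-smb/fio-test.bin --size=4G --bs=1M --rw=write --ioengine=psync --end_fsync=1 --group_reporting

# check the NIC counters for errors/drops
ip -s link show eno1

Run it again with --rw=read to compare against your CrystalDiskMark sequential numbers.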



Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
There is also a 4 x 1G LAG set up on the switch for both the storage box and the Proxmox host; the second screenshot above is over a 1G link.

LACP?
If LACP, note that a single TCP connection can't use more than one link. (I don't know exactly how the SMB protocol works here.)
Also, use layer3+4 hashing for your bond (and the same on your physical switches) to load-balance by IP + port.
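For example, a bond stanza in /etc/network/interfaces on the Proxmox host would look roughly like this (interface names and addresses are examples):

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2 eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.20/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0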
 
What does your VMs config file look like? Do you use virtio SCSI + SCSI + virtio drivers for that VM?
Currently using IDE or SATA; despite having the QEMU agent installed, Windows doesn't like VirtIO for some reason. However, I get the same speed results using VirtIO on a Linux box.
 
It's not clear where your first benchmark was run: on the host providing the SMB share directly, or via CIFS from another host.
I would start by reducing the number of variables, for example:
- Mount the CIFS share directly on Proxmox and run the benchmark from the Proxmox host, without adding a VM into the mix.
- Check the network interfaces for errors.
- Do a network capture to check for retransmits.
- Reduce the LAG to a single port (you are not getting 4x the performance from a LAG; the hashing is done on an IP basis and there are only two IPs involved in a CIFS session). You are better off using multiple individual ports with multipath/multisession.



Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
The first screenshot was taken directly on the Windows host containing the storage.

Do you have a preference for what I should use to run the benchmark on Proxmox directly?

A single port vs. the 4-port LAG does make a difference, just not nearly as big as anticipated.
 
LACP?
If LACP, note that a single TCP connection can't use more than one link. (I don't know exactly how the SMB protocol works here.)
Also, use layer3+4 hashing for your bond (and the same on your physical switches) to load-balance by IP + port.
Not using LACP, just a LAG configured on the Netgear switch. But the speed should still be a lot higher, even over a single 1G link without the LAG in place.
 
Currently using IDE or SATA; despite having the QEMU agent installed, Windows doesn't like VirtIO for some reason. However, I get the same speed results using VirtIO on a Linux box.
VirtIO SCSI should be way faster because it is paravirtualized, while IDE/SATA are fully emulated.
 
You could run iperf3 to see if your network is the bottleneck. Do you use VirtIO for your virtual NIC? The emulated Intel E1000 is very slow.
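For example (the IP is illustrative), start the server on the storage box and run the client from inside the VM:

# on the storage box providing the share
iperf3 -s

# inside the VM, pointing at the storage box's IP
iperf3 -c 192.168.1.10 -t 30 -P 4

The -P 4 runs four parallel streams, which also shows whether multiple connections get spread across the LAG's links.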
 
You could run iperf3 to see if your network is the bottleneck. Do you use VirtIO for your virtual NIC? The emulated Intel E1000 is very slow.
Hiya,
Thanks for the suggestion:
[Screenshot: iperf3 results]
As you can see, it is a VirtIO NIC now and the VM is using a VirtIO virtual disk; however, it still yields nowhere near the same speed as the actual drive on the host (this is still over the 4G LAG):
[Screenshot: CrystalDiskMark results in the VM over the 4G LAG]
 
I just added the same SMB share directly to my testing VM using Map Network Drive, and it is getting the expected read speeds for a 4G LAG (writes not so much, but that is less of a requirement). This proves it has to be Proxmox breaking it somehow.
[Screenshot: CrystalDiskMark results for the SMB share mapped directly in the VM]
 
Currently using IDE or SATA; despite having the QEMU agent installed, Windows doesn't like VirtIO for some reason. However, I get the same speed results using VirtIO on a Linux box.

IDE/SATA will give terrible performance. The QEMU agent has nothing to do with the disk; you need the VirtIO drivers installed.

The root of the stable VirtIO ISO has an installer that installs everything: "virtio-win-guest-tools.exe".

However, if you installed Windows on the VM with IDE/SATA to start off with, then you need to jump through some hoops to convert it to VirtIO; just changing the hardware will cause a boot failure.
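
Roughly, one common way through those hoops looks like this (VM ID, storage and disk names are illustrative):

# 1. attach a small temporary disk on the VirtIO SCSI controller so Windows detects the new hardware
qm set 101 --scsihw virtio-scsi-pci --scsi1 local-lvm:1

# 2. boot Windows, install the VirtIO storage driver (e.g. via virtio-win-guest-tools.exe), then shut the VM down

# 3. detach the OS disk from IDE, re-attach it on the SCSI bus and fix the boot order
qm set 101 --delete ide0
qm set 101 --scsi0 local-lvm:vm-101-disk-0
qm set 101 --boot order=scsi0
qm set 101 --delete scsi1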
 
IDE/SATA will give terrible performance. The QEMU agent has nothing to do with the disk; you need the VirtIO drivers installed.

The root of the stable VirtIO ISO has an installer that installs everything: "virtio-win-guest-tools.exe".

However, if you installed Windows on the VM with IDE/SATA to start off with, then you need to jump through some hoops to convert it to VirtIO; just changing the hardware will cause a boot failure.
Hiya, yes I know. I am just adding disks from the storage to test with, as I don't want them slowed down by the OS reading and writing during the test. I just can't figure out why the drive is so slow when there is no reason for it to be.
 
