Poor network performance on VM or misconfiguration?

nireos

New Member
Jun 26, 2024
First, before you respond with the whole "it is all software based, etc.", I understand that. So please read the entirety of my post before responding. I am truly trying to understand if I have misconfigured something or if this is just how Proxmox is.

I am considering moving from Hyper-V to Proxmox. I have no experience with Proxmox, but I do have considerable experience with VMWare. I was hoping there would be enough similarity to make this easy. I decided to conduct an apples-to-apples comparison before making the switch.

Scenario:
I built a brand new Windows 2022 server on my current Hyper-V environment. Single vCPU, 8GB memory. Nothing fancy. No special services. Just a base server install.

I exported the VM and followed a guide I found online for importing it into Proxmox. Done. Easy.
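
For reference, the usual CLI route for this kind of import is something like the sketch below. The VM ID (100), the VM name, the bridge (vmbr0), the storage name (local-lvm) and the exported VHDX path are all placeholders, not taken from that guide.

```bash
# Create an empty VM matching the Hyper-V spec (1 vCPU, 8GB RAM, VirtIO NIC).
# VM ID 100, the name, bridge vmbr0 and storage "local-lvm" are placeholders.
qm create 100 --name win2022-test --memory 8192 --sockets 1 --cores 1 \
    --ostype win11 --scsihw virtio-scsi-pci --net0 virtio,bridge=vmbr0

# Import the exported Hyper-V disk into Proxmox storage, then attach it
# and make it the boot disk (the disk name follows Proxmox's usual pattern).
qm importdisk 100 /mnt/export/win2022.vhdx local-lvm
qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0
```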

Now, here is where I need help. If I run an internet speed test (yes, I know these are not the best tests, but I am trying to keep it simple without getting into a lot of technical iperf or other network tests), the VM caps out at 200Mbps.

This same VM on Hyper-V hit 800Mbps.

If I add vCPUs and/or cores to the VM, then I can get it to hit 800Mbps.

So, I decided to try the same VM on VMWare. On VMWare it performs as expected (800 Mbps) with a single vCPU.

Processor Type - I set this to "Host" per an article I found here on the forum. This did help.
NIC Type - VirtIO
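
For anyone comparing settings, those two options map to roughly the following on the CLI (a sketch only; VM ID 100 and bridge vmbr0 are assumptions):

```bash
# CPU type "host" passes the physical CPU's feature flags through to the guest;
# the VirtIO model is the paravirtualized NIC. VM ID 100 and vmbr0 are placeholders.
qm set 100 --cpu host
qm set 100 --net0 virtio,bridge=vmbr0
```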

So my question(s)....

1. Is this normal expected behavior?
2. Do I have something misconfigured either in the Proxmox host or the VM?
3. Why does Hyper-V and/or VMWare see better performance without adding vCPUs?
 
That link worked, but I don't think that is the answer. It says to set the Multiqueue value equal to the number of vCPUs. Increasing the number of vCPUs already fixes the performance issue.
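
For reference, that Multiqueue field in the GUI corresponds to the queues= option on the NIC; from the CLI it would look roughly like this (VM ID 100, bridge vmbr0, and a 4-vCPU VM are assumptions):

```bash
# One VirtIO queue per vCPU (assuming a 4-vCPU VM here).
# VM ID 100 and bridge vmbr0 are placeholders.
qm set 100 --net0 virtio,bridge=vmbr0,queues=4
```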

I want to understand why I need to increase the number of vCPUs on Proxmox to get the same performance I already get from Hyper-V and VMWare without increasing vCPUs.
 
Oh, sorry for that wrong link :rolleyes:

Did you benchmark it with multiqueue?

Hyper-V does this via VMQ etc. (if the NIC and driver support it); take a look at the VM-related NIC settings in Hyper-V.

No rocket science here.

And no, you don't have to increase the number of vCPUs if the performance in Hyper-V or VMWare was OK.
 
I did test it with multiqueue and it still only gets around 200Mbps on an internet speed test.
 
So I think it must be a hardware-related issue (switch/NIC, etc.).

Use iperf or similar for testing.
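
For example, running iperf3 between the VM and another machine on the same LAN takes the internet link and the speed-test service out of the picture (iperf3 assumed installed on both ends; the IP address is a placeholder):

```bash
# On another machine on the same LAN, start the server side:
iperf3 -s

# Inside the VM (Windows builds of iperf3 exist), run a multi-stream test
# in both directions; 192.168.1.50 stands in for the server's address.
iperf3 -c 192.168.1.50 -P 4 -t 30
iperf3 -c 192.168.1.50 -P 4 -t 30 -R
```
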
While I cannot say with 100% certainty that it is not a hardware issue, that is highly improbable. This is 1 of 3 servers, all purchased less than a year ago. It was previously running Hyper-V with no issue, and it is connected to the same switch, and the same ports, that it was connected to as a Hyper-V server.

Thus I really think it is either a configuration issue (my lack of knowledge of Proxmox) or Proxmox simply does not perform at the same level as Hyper-V or VMWare. Unfortunately, presales support basically told me to go pound sand, hence why I am asking the community for help figuring out what I might have misconfigured.

My other thought is that it could be a driver issue with the Broadcom NICs. I am not a fan of Broadcom NICs, but that is what Dell put in the 3 servers.
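
One way to rule the Broadcom driver in or out would be to check what the Proxmox host is actually loading for that NIC and which offloads are active; a rough sketch (eno1 is a placeholder for the real interface name):

```bash
# Identify the NIC and the driver/firmware the Proxmox host loaded for it.
lspci -nn | grep -i ethernet
ethtool -i eno1               # driver, version, firmware-version

# Offload features and hardware queue counts that influence throughput.
ethtool -k eno1 | grep -E 'segmentation|gro|checksum'
ethtool -l eno1               # current vs. maximum combined queues
```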
 