How to change to 10Gbps NIC Card option for better migration performance

hyporian

New Member
Nov 17, 2025
Hello everyone,

I’m currently testing migration performance between two test servers connected directly with a 10Gbps DAC SFP+ cable, but I cannot achieve anywhere near 10Gbps. My migration speed is stuck around 70 MB/s (~560 Mbps), which is far below expectations.

Both servers also have regular RJ45 NICs, but those are only used for GUI access and ISO uploads, not for migration.

I also tried installing Ubuntu on Server B and transferring files over FTP, which averaged about 400–800 MB/s.

Hardware Setup

Server A (Proxmox VE 9.0)

  • Model: Dell PowerEdge R620 (Server model may differ in the future)
  • NIC: Chelsio T320 10GbE Dual-Port Adapter
  • OS: Proxmox VE 9.0
  • Disk: Dell Enterprise VJM47
  • Network configuration (see the check sketch after this block):
    auto vmbr1
    iface vmbr1 inet static
    address 10.10.10.2/24
    mtu 9000
    bridge-ports enp5s0
    bridge-stp off
    bridge-fd 0
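
For reference, a quick way to confirm what the 10G port actually negotiated and that jumbo frames really pass end to end (a minimal check, using the interface names and peer IP from the configuration above; adjust if yours differ):

    # negotiated speed/duplex of the SFP+ port
    ethtool enp5s0 | grep -E 'Speed|Duplex'
    # effective MTU on the bridge and the physical port
    ip link show vmbr1
    ip link show enp5s0
    # jumbo-frame test: 8972-byte payload + 28-byte headers = 9000, Don't-Fragment set
    ping -M do -s 8972 -c 4 10.10.10.3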

Server B (VMware ESXi 6.5)

  • Model: HP ProLiant DL160 G6 (server model and VMware version may differ in the future)
  • NIC: Intel X520 10GbE (SFP+)
  • OS: ESXi 6.5
  • Disk: Dell Enterprise VJM47
  • Network configuration:
    • Virtual Switches: Created vSwitch1 using the 10GbE NIC and set the MTU to 9000.
    • Port group: PortGroup1, no VLAN, on vSwitch1.
    • VMkernel NICs: Added a new VMkernel NIC on PortGroup1 with a static IP of 10.10.10.3/24 and MTU 9000.
    • Physical NICs: Changed the link setting from auto-negotiate to 1000 Mbps, full duplex (see the check sketch after this list).
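
For reference, the corresponding checks can be run from the ESXi shell (a minimal sketch; vSwitch1 matches the setup above, while vmk1 and the vmnic names are placeholders that may differ on this host):

    # physical NIC link state, speed and duplex
    esxcli network nic list
    # MTU of the vSwitch
    esxcli network vswitch standard list -v vSwitch1
    # VMkernel interfaces with their MTU
    esxcli network ip interface list
    # jumbo-frame test towards the Proxmox host (vmk1 is a placeholder)
    vmkping -I vmk1 -d -s 8972 10.10.10.2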

Problem

Even though the DAC SFP+ link is 10Gbps and MTU is set to 9000 on both sides, during VM migration I cannot get more than ~70 MB/s.
I expected to reach at least 500–800 MB/s, ideally 900+ MB/s, over a direct 10GbE connection, but the actual migration performance is far below that.
Note: I have also tried changing the Migration option in the Datacenter settings (sketch below), but the migration speed is still low.
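
For reference, the Migration option I mean is the one stored in /etc/pve/datacenter.cfg. A minimal sketch of the kind of setting available there (as far as I understand, it only affects migrations between PVE nodes, not the ESXi import wizard; the subnet is just my test network):

    # /etc/pve/datacenter.cfg (sketch)
    # route PVE-to-PVE migration traffic over the direct 10G link, without SSH encryption
    migration: type=insecure,network=10.10.10.0/24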
 
You mentioned migration and VMware, so we can presume that you are doing an ESXi to PVE migration? If that guess is correct, then it's safe to say that you are also using the PVE ESXi migration wizard?

If the above is correct, you have to keep in mind that the "migration" is not just a stream of data. There are API exchanges, disk reads and data-chunk extraction on the ESXi side, transfer of those chunks in ESXi format over the ESXi API to PVE, conversion of the data to the PVE format, and writing of the data to disk.
All of this happens while potentially competing with other processes, encrypting/decrypting the data, and performing other operations.

It is unlikely that you are limited by network bandwidth.
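
If you want a rough feel for what the format-conversion step alone costs, one option (a sketch only; the wizard's actual pipeline may differ, and the paths are placeholders) is to time qemu-img on a VMDK that is already local to the PVE host:

    # convert VMDK -> qcow2 with no network involved, to isolate conversion cost
    time qemu-img convert -p -f vmdk -O qcow2 /path/to/source.vmdk /var/lib/vz/images/999/vm-999-disk-0.qcow2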

Cheers


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
How do you test?
Please use iperf3.
I also tested the network bandwidth using iperf3, and it performs really well on the 10Gbps link.
(The IPs in my earlier examples are different because I changed them during testing.)
Thanks for the help!
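
For anyone repeating the test, a typical iperf3 invocation looks like this (a sketch; the exact options and the iperf3 binary location on the ESXi side may differ from what I used):

    # on one end
    iperf3 -s
    # on the other end: 30 seconds, 4 parallel streams
    iperf3 -c 10.10.10.2 -t 30 -P 4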
 

Attachments

  • brave_o2D4SBglLi.png
If that guess is correct, then it's safe to say that you are also using the PVE ESXi migration wizard?
I use the ESXi migration wizard in PVE.
So based on what you explained, even with a 10Gbps direct connection, the migration speed won’t significantly increase because the process has to go through multiple steps such as:
  • API communication
  • reading and extracting data from ESXi
  • transferring data chunks through the ESXi API
  • converting the disk format
  • writing the converted data on the Proxmox side
This means the network is not the main bottleneck. Instead, the entire ESXi → PVE conversion pipeline is what limits the performance.

So would this be considered a software-related limitation rather than a network-related issue? Correct me if I'm wrong, much appreciated.
 
  • Model: HP ProLiant DL160 G6 (server model and VMware version may differ in the future)
  • Disk: Dell Enterprise VJM47
Really, an HP G6 (a very old server) with an SSD? Did you even test disk performance on the host?
 
So based on what you explained, even with a 10Gbps direct connection, the migration speed won’t significantly increase
In your first post you mentioned that you are already using a direct 10 Gbit connection.

You reported that your migration throughput tops out at roughly 70 MB/s (about 560 Mbit/s), yet you also tested and confirmed that a network-only benchmark can achieve much higher speeds. That does indeed indicate that the network is not your bottleneck.

If you have the time, you can:
  • measure disk read performance on the source
  • measure disk write performance on the target
This will exclude computational overhead. You can then combine the read/transfer/write results to get a baseline of your disk and network capabilities without any additional processing. Depending on the results, you can add more layers, for example manually pulling the disk over the ESXi API without the QCOW conversion (see the sketch further below).
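
A rough sketch of those two measurements (all paths, names and sizes below are placeholders; the busybox dd on ESXi has limited options, and caching can inflate the numbers):

    # on the ESXi host: sequential read of an existing flat VMDK
    time dd if=/vmfs/volumes/datastore1/myvm/myvm-flat.vmdk of=/dev/null bs=1M count=4096
    # on the PVE host: sequential write to the target storage, bypassing the page cache
    dd if=/dev/zero of=/var/lib/vz/dd-writetest bs=1M count=4096 oflag=direct conv=fsync
    rm /var/lib/vz/dd-writetest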

All of this depends on your willingness to invest time in performing these measurements.
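
For the "extraction over the network without conversion" part, one possible sketch (the VM folder, datastore name and URL form are assumptions based on the standalone-host datastore file interface) is to pull a flat VMDK over the ESXi HTTPS file service and watch the transfer rate curl reports:

    # prompts for the root password; discards the data, prints the average speed
    curl -k -u root -o /dev/null \
      "https://10.10.10.3/folder/myvm/myvm-flat.vmdk?dcPath=ha-datacenter&dsName=datastore1"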


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox