[SOLVED] New Proxmox install with 10gbe NIC, but still getting slower 2.5gbe speed from old server.

blindguynar

New Member
May 21, 2025
This is likely a very easy question, but I've searched and can't seem to find the answer. I wouldn't call myself a guru by any stretch of the word, so forgive me if this is first-day-of-grade-school stuff.

I'm migrating to a new server that has a dual 10GbE NIC. The old server had a dual 2.5GbE NIC. I've created the bond & bridge correctly, and ethtool shows 20Gb/s for the bond/bridge (and I know that's the aggregate bandwidth, not the speed a single flow will get).
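For reference, the bond/bridge in my /etc/network/interfaces looks roughly like the sketch below. The interface names, bond mode, subnet mask, and gateway here are placeholders to show the shape of it, not an exact copy of my config:

auto bond0
iface bond0 inet manual
        # placeholder names for the two 10GbE ports
        bond-slaves enp1s0f0 enp1s0f1
        bond-miimon 100
        # LACP shown as an example; the switch side has to match
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
        # host address from the iperf runs below; mask and gateway are placeholders
        address 192.168.0.233/24
        gateway 192.168.0.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        # VLAN-aware so guests can carry their own VLAN tags
        bridge-vlan-aware yes
        bridge-vids 2-4094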

I'm migrating via Proxmox Backup Server (connected to both the old and new boxes).

Basically I'm shutting down the CTs & VMs on the old server, backing them up, and restoring them to the new server.
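For context, the CLI equivalent of what I'm doing for one container would look roughly like this; the ID 101, the PBS storage name "pbs", and the target storage "local-lvm" are just examples, and I'm actually driving it from the GUI:

# on the old server: stop the container and back it up to the PBS storage
pct shutdown 101
vzdump 101 --storage pbs --mode stop

# on the new server: list the backups on the PBS storage, then restore one
pvesm list pbs
pct restore 101 pbs:backup/ct/101/<timestamp> --storage local-lvm

(qmrestore does the same job for VMs.)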

My problem is: any CT/VM I create from scratch on the new server gets the correct speed, but the CTs/VMs I'm pulling from PBS are somehow retaining the 2.5GbE speeds. I'm assuming they're configured that way somewhere, but I can't find it. I've compared new/old CTs/VMs and the network settings in the GUI are the same. I've looked in the CT/VM config files and they're the same (both using the vmbr0 bridge). The only difference is the VLAN, and I don't think I have restrictions on inter-VLAN routing in my network. I can certainly try to put them or my client on the native VLAN to rule that out, though (didn't think to try that).
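For what it's worth, this is roughly how I compared the network settings between a restored guest and a from-scratch one on the CLI (the IDs here are just examples):

# restored container vs. one created from scratch
pct config 101 | grep ^net
pct config 102 | grep ^net

# same comparison for VMs
qm config 201 | grep ^net
qm config 202 | grep ^net

Both show a net0 line pointing at bridge=vmbr0; the only difference between them is the VLAN tag.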

One was created from scratch and is getting the 10GbE, and the other was pulled from backup and is getting 2.5GbE. The network config in the containers appears to be the same. iperf3 shows the correct speeds for "from scratch" containers and the old speeds for "migrated" containers.

I'm pulling my hair out trying to figure it out. The CTs & VMs are stopped on the old server, but the server itself is still running. I've tried both CTs & VMs, and both get 2.5GbE unless I create them from scratch, so I assume there is some configuration within the CT/VM themselves, but I can't seem to locate it.

Anyone have any thoughts?
 

Attachments

  • Screenshot 2025-05-20 at 10.31.06 PM.png (30.1 KB)
  • Screenshot 2025-05-20 at 10.30.29 PM.png (28.7 KB)
  • Screenshot 2025-05-20 at 10.40.37 PM.png (46.1 KB)
  • Screenshot 2025-05-20 at 10.39.45 PM.png (47.9 KB)
Here is an example of what I'm facing. Both iperf3 runs were done from the host, one after another. As you can see, the first one is slow compared to the second. The first target is a container transferred from the old server, and the second is a container I created on the new server from scratch.

root@proxmox-prime:~# iperf3 -c docker.home
Connecting to host docker.home, port 5201
[  5] local 192.168.0.233 port 45048 connected to 192.168.50.183 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   267 MBytes  2.24 Gbits/sec    1   3.29 MBytes
[  5]   1.00-2.00   sec   264 MBytes  2.21 Gbits/sec    0   3.29 MBytes
[  5]   2.00-3.00   sec   269 MBytes  2.25 Gbits/sec    0   3.29 MBytes
[  5]   3.00-4.00   sec   262 MBytes  2.20 Gbits/sec    0   3.29 MBytes
[  5]   4.00-5.00   sec   259 MBytes  2.17 Gbits/sec    0   3.29 MBytes
[  5]   5.00-6.00   sec   266 MBytes  2.23 Gbits/sec    0   3.29 MBytes
[  5]   6.00-7.00   sec   256 MBytes  2.15 Gbits/sec    0   3.29 MBytes
[  5]   7.00-8.00   sec   262 MBytes  2.20 Gbits/sec    0   3.29 MBytes
[  5]   8.00-9.00   sec   265 MBytes  2.22 Gbits/sec    0   3.29 MBytes
[  5]   9.00-10.00  sec   254 MBytes  2.13 Gbits/sec    2   2.62 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  2.56 GBytes  2.20 Gbits/sec    3          sender
[  5]   0.00-10.00  sec  2.56 GBytes  2.20 Gbits/sec               receiver

iperf Done.
root@proxmox-prime:~# iperf3 -c docker-too.home
Connecting to host docker-too.home, port 5201
[  5] local 192.168.0.233 port 49278 connected to 192.168.0.109 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  7.74 GBytes  66.5 Gbits/sec    0    516 KBytes
[  5]   1.00-2.00   sec  7.84 GBytes  67.3 Gbits/sec    0    577 KBytes
[  5]   2.00-3.00   sec  7.80 GBytes  67.0 Gbits/sec    0    577 KBytes
[  5]   3.00-4.00   sec  7.68 GBytes  66.0 Gbits/sec    0    638 KBytes
[  5]   4.00-5.00   sec  7.84 GBytes  67.3 Gbits/sec    0    751 KBytes
[  5]   5.00-6.00   sec  7.84 GBytes  67.3 Gbits/sec    0   1005 KBytes
[  5]   6.00-7.00   sec  7.79 GBytes  66.9 Gbits/sec    0   1005 KBytes
[  5]   7.00-8.00   sec  7.76 GBytes  66.6 Gbits/sec    0   1005 KBytes
[  5]   8.00-9.00   sec  7.75 GBytes  66.6 Gbits/sec    0   1005 KBytes
[  5]   9.00-10.00  sec  7.79 GBytes  66.9 Gbits/sec    0   1005 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  77.8 GBytes  66.9 Gbits/sec    0          sender
[  5]   0.00-10.00  sec  77.8 GBytes  66.9 Gbits/sec               receiver

iperf Done.
 
Never mind, this is something on my network and inter-VLAN routing. If the CT/VM is on the same VLAN then I'm getting all the beans, but if it's on another VLAN the traffic must be leaving my aggregation switch and hitting the router.

This is not Proxmox related, so I'll close this and mark it as solved.

This ends up being layer 2 switching vs. layer 3 routing. I thought my UniFi aggregation switch was doing layer 3, but that's only the Pro model. So traffic on the same native VLAN is switched without going back to the router, but traffic to other VLANs is routed back through the router, hence the reduced speed. I can do layer 3 on my Pro Max 48 PoE, but I've got to set a few things up. Just putting this note here in case someone reads it with the same issue.
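If anyone wants to check for the same thing, a quick traceroute from the host to a guest on another VLAN versus one on the same VLAN makes the extra hop obvious (the addresses below are the two targets from my iperf runs):

# cross-VLAN target: expect an extra hop at the router/gateway
traceroute 192.168.50.183

# same-VLAN target: should arrive in a single hop without leaving the switch
traceroute 192.168.0.109

If the cross-VLAN path shows your gateway as an intermediate hop, that traffic is being routed instead of switched.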

Told you in the OP that I was no guru, and this just proves it! Got to start somewhere!
 