Changing the management interface?

Aug 20, 2021
I'd like to change the network setup of the server slightly: bond the onboard NICs and attach the main bridge, which carries the management IP, to that bond. So instead of the bridge sitting on a single NIC, the NICs are bonded and the bridge is attached to the bond instead.

Can I do this from the GUI and then reboot, or is there anything more to changing this?

And will this give a single VM more bandwidth than a single NIC if I use load-balanced bonding?
 
Yes, you can do that with the WebUI.
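For reference, the resulting /etc/network/interfaces on the host could look roughly like this; a minimal sketch, assuming the onboard NICs are eno1 and eno2 and that 192.168.1.10/24 is the management address (replace names, mode and addresses with your own):

auto eno1
iface eno1 inet manual

auto eno2
iface eno2 inet manual

# bond of the two onboard NICs
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

# main bridge now sits on the bond and carries the management IP
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

802.3ad only works if the switch ports are configured for LACP; without that, a mode like balance-alb is a common fallback. Double-check the file before rebooting, because a mistake here takes the management interface down with it.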
As for whether load-balanced bonding gives a single VM more bandwidth than a single NIC: that depends. Bonding won't make your network faster, it only increases the total bandwidth. So if you bond two 1 Gbit NICs, two different hosts can each access that VM with 1 Gbit, but a single host won't be able to access the VM with 2 Gbit; each connection is still limited to 1 Gbit. If your managed switch supports LACP with a layer3+4 hash policy, a single host could access a single VM with 2 Gbit, but even then each individual connection is still limited to 1 Gbit (so you get 1 Gbit per port and not just 1 Gbit per host). That host could, for example, use 1 Gbit over SMB plus 1 Gbit over NFS at the same time. If you really want more speed and not just more bandwidth, you would need to buy a faster NIC (second-hand 10 Gbit SFP+ cards are quite cheap now) instead of using bonding.
Bonding is like adding another lane to a road without increasing the speed limit. The road can then handle more cars at the same time without traffic jams, but because the speed limit isn't increased, each car still needs the same time to reach its destination.
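One way to see that per-connection limit in practice (assuming iperf3 is installed on both ends and 10.0.0.5 stands in for the VM's address):

# a single TCP stream is hashed onto one link, so it tops out around 1 Gbit
iperf3 -c 10.0.0.5
# several parallel streams use different source ports, so a layer3+4 hash
# can spread them across both links of the bond
iperf3 -c 10.0.0.5 -P 4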
 
I'm going to be using a managed switch. I know the concept; I just wanted to confirm whether Proxmox could do it. I had issues trying to get more than 10 Gb/s when allocating a quad-port PCIe NIC to the VMs directly, even with the right bonding, since it seems that assigning just one PCIe device gives the VM access to all 4 NICs but at only 1/4 of the bandwidth. In this case Proxmox will be handling the NIC.
 
Network performance above 2.5 Gbit really depends on how fast your CPU can handle the packets. My 16-thread 2.3 GHz CPU, for example, can't handle more than 3-4 Gbit of my 10 Gbit NIC if I use the default 1500 MTU, because there are simply too many packets for the CPU to keep up with.
If I switch to jumbo frames (9000 MTU) there are fewer but bigger packets and 10 Gbit works fine.
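If you want to try jumbo frames on a setup like the one above, the larger MTU has to be set on every hop: the physical NICs, the bond, the bridge, the switch ports, the VM NICs inside the guests and the other endpoint. A rough sketch, reusing the interface names assumed earlier and 10.0.0.5 as a placeholder target:

# in /etc/network/interfaces, add an mtu line to the NICs, the bond and the bridge
iface bond0 inet manual
        bond-slaves eno1 eno2
        mtu 9000
iface vmbr0 inet static
        bridge-ports bond0
        mtu 9000
# verify jumbo frames work end-to-end with a ping that forbids fragmentation
# (8972 = 9000 bytes minus 28 bytes of IP/ICMP headers)
ping -M do -s 8972 10.0.0.5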
 
iperf has no issues and handles it like a champ; in the benchmark I got above 9 Gbit/s without jumbo frames. The CPU doesn't have to be fast: you either let the VM handle the NIC directly through PCIe passthrough or SR-IOV, or you use virtio and have Debian handle it. I have a much older file server built around a Phenom II that can transcode a single 4K Plex video live (4K out, input higher than 4K) and can fully saturate the SFP+ link I give it for file transfers. You don't need much CPU for 10 Gb/s, but you do need to make sure your setup is right.

The issue I have is that Proxmox won't allow me to pass multiple PCIe ports from the same card to fully utilise the bandwidth. A 4-port NIC shows up as 4 PCIe devices in the hardware list, but you can only assign one of the PCIe links, which exposes the VM to all 4 NICs of the card but at 1/4 of the PCIe link width, limiting maximum throughput to that of just one NIC, while Proxmox won't allow all 4 PCIe links to be assigned to the same VM and gives the error "device already assigned". For this I used an Epyc 2 to test.
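For what it's worth, the four ports of such a card are normally separate functions of one PCI device, and instead of adding each function as its own hostpciX entry it may be worth trying to pass the whole device (all functions) to the VM in a single entry; a hedged sketch, where the bus address 01:00 and VM ID 100 are placeholders:

# list the functions of the quad-port card (expect 01:00.0 through 01:00.3)
lspci -s 01:00
# pass all functions of the device to one VM as a single hostpci entry
qm set 100 -hostpci0 01:00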

For performance, one of the bigger issues is your BIOS settings, as I have never had any CPU usage issues doing 10G networking on older hardware, even on some of the more hated older AMDs. I used Proxmox on much older hardware when designing large website architectures, and even on old hardware I am able to get 300 transactions/s using first-gen i-series Xeons.

I've used Samba over 10G on old hardware; the limiting factor was the HDDs, but it was a large software RAID array with plenty of RAM for caching.
 
