Proxmox internal networking speeds

Rowe

Member
Sep 14, 2019
Hello.

I have tried to virtualize my router (running OPNsense).
I did this in the belief that my other containers and virtual machines on the Proxmox host would use some kind of internal network link, rather than having to go out one network card just to come back in through another. (I believe VMware ESXi does this.)

Here is a little about my setup:
- My Proxmox host has a bond across both ports of a 2-port Intel NIC connected to a switch. This NIC is used for all of my VMs and containers, but not for my OPNsense server.
- My OPNsense server has a 4-port Intel NIC to itself: one port for WAN and one port for LAN. I did this in the belief that it would be more secure, since it seemed like a good idea not to have the router running on the same NIC as all my other general VM traffic. On top of that, I also had the card lying around.
- I first tried to pass this card through to OPNsense, but realized this would force the network traffic through the switch before it comes back into the Proxmox host (correct me if this is wrong). So instead I just made two regular network bridges on Proxmox (one for WAN, the other for LAN) and used those, roughly as sketched below.
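What I mean by the two bridges is something like this in /etc/network/interfaces; the physical port names here are only placeholders and will differ on other hardware:

# /etc/network/interfaces (excerpt, illustrative only -- port names are placeholders)
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp5s0f0   # WAN port of the 4-port card
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp5s0f1   # LAN port of the 4-port card
        bridge-stp off
        bridge-fd 0

The OPNsense VM then gets one virtual NIC on vmbr1 (WAN) and one on vmbr2 (LAN).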

I have then run an iperf3 test between a couple of the VMs/containers and the router to see the transfer speeds. The results are regular gigabit speeds.
If I instead run iperf3 between two VMs that use the same network card, I see transfer speeds in the 10-gigabit range. The results are the same for VM-to-container and container-to-container.
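For reference, the tests were of this form (the address is just an example for the router's LAN IP):

# on the OPNsense VM (iperf3 server)
iperf3 -s

# on a VM or container (iperf3 client)
iperf3 -c 192.168.1.1 -t 30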

This confused me a little, and the only explanation I can come up with is that the traffic cannot stay internal when the guests are on different NICs.
Either that, or I have misconfigured something on the Proxmox host or the OPNsense router.

I hope someone can clarify these things for me a little. I haven't been able to find anything about this on the internet.
Thanks.
 
You only get more than 1 Gbit if the host is doing the routing or all VMs are on the same bridge. If your OPNsense VM is on one isolated bridge and the other VMs are on another isolated bridge, they can't communicate directly and will do so through your switch.

Why not use one port (or two as a bond) for WAN (you can PCI passthrough a single port) and the other four ports as an LACP quad bond for your DMZ/LAN? If you want more than 1 Gbit, you should make sure that the LAN/DMZ side of your OPNsense is on the same bridge as your other VMs; see the sketch below.
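A rough sketch of the host side in /etc/network/interfaces, assuming a quad LACP bond and a single shared LAN/DMZ bridge (port and bridge names are placeholders):

auto bond0
iface bond0 inet manual
        bond-slaves enp5s0f0 enp5s0f1 enp5s0f2 enp5s0f3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr1
iface vmbr1 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

The LAN vNIC of the OPNsense VM and the vNICs of your other guests then all attach to vmbr1, so guest-to-router traffic never leaves the host; the four switch ports on the other end have to be configured as an 802.3ad (LACP) group.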
 
After I posted my question, I thought of something along exactly these lines.
It's not really that I "need" more than 1 gigabit, but it would ease the load on the link if the traffic never had to leave the NIC.

I will definitely try this out.
Thanks a lot.
 
