Hello.
I have tried to virtualize my router (running OPNsense).
I did this in the belief that my other containers and virtual machines on the Proxmox host would use some kind of internal network path, instead of having to go out one network card just to come back in through another. (I believe VMware ESXi does this.)
Here is a little about my setup:
- My Proxmox host has a bond across both ports of a 2-port Intel NIC connected to a switch. This NIC is used for all of my VMs and containers, but not for my OPNsense server.
- My OPNsense server has a 4-port Intel NIC to itself: one port for WAN and one port for LAN. I did this in the belief that it would be more secure, since it seemed like a good idea not to have the router running on the same NIC as all my other general VM traffic. On top of that, I also did it because I had the card lying around.
- I first tried to pass this card through to OPNsense, but realized this would force the network traffic through the switch before coming back into the Proxmox host (correct me if this is wrong). So instead I made two regular network bridges on Proxmox (one for WAN, the other for LAN) and used those; a rough sketch of the config is below.
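For reference, here is roughly what the relevant part of my /etc/network/interfaces looks like. The interface names, addresses, and the bond mode are just placeholders for this post, not exactly what I have:

```
# Bond across the 2-port Intel NIC, used by all general VMs/containers
# (bond mode here is a placeholder; yours may differ)
auto bond0
iface bond0 inet manual
        bond-slaves enp3s0f0 enp3s0f1
        bond-miimon 100
        bond-mode 802.3ad

# Default bridge for VM/container traffic, on top of the bond
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

# WAN bridge on one port of the 4-port Intel NIC (OPNsense WAN)
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp5s0f0
        bridge-stp off
        bridge-fd 0

# LAN bridge on another port of the 4-port Intel NIC (OPNsense LAN)
auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp5s0f1
        bridge-stp off
        bridge-fd 0
```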
I have then run an iperf3 test between a couple of the VMs/containers and the router to check the transfer speeds. The results are regular gigabit speeds.
If I instead run iperf3 between two VMs on the same network card, I see transfer speeds in the 10-gigabit range. The results are the same between a VM and a container, or between two containers.
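For completeness, this is roughly how I tested; the IP addresses are just examples standing in for my actual hosts:

```
# On the target (OPNsense, or another VM/container): start an iperf3 server
iperf3 -s

# From a VM/container on the main bridge:
iperf3 -c 192.168.1.1    # to the router: ~1 Gbit/s
iperf3 -c 192.168.1.50   # to a VM on the same bridge: ~10 Gbit/s
```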
I got a little confused by this, and the only explanation I can come up with is that the traffic cannot stay internal to the host when it has to cross between two different NICs/bridges.
Either that, or I have misconfigured something on the Proxmox host or the OPNsense router.
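In case the guest NIC model matters here, this is how I checked what my guests are using (virtio in my case; 100 and 101 are just example IDs from my setup):

```
# Show the virtual NIC configuration of a VM and a container
qm config 100 | grep -i net    # VM: should show e.g. virtio=...,bridge=vmbr0
pct config 101 | grep -i net   # container: shows the bridge and device type
```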
I hope someone can clarify these things for me a little; I haven't been able to find anything about this on the internet.
Thanks.