Hello everyone,
I'm planning a new firewall build and I'm looking for advice on a few aspects. Attached is a global view of what I want to do.
My current concern is how to get good throughput while still keeping good isolation between the VMs I run on Proxmox, like (C) and (D) (and others not in the diagram), and the clients like (F) or (G).
Meaning F would get a full 1 Gb/s to C while G gets 1 Gb/s to D (or another similar VM). But C should not be able to talk to D without the traffic going through pfSense, so I ensure my firewall rules apply.
I assume I will need Open vSwitch to handle all the virtual NICs in this picture. I have drawn a dedicated virtual NIC in pfSense for each VM, but that would also mean a dedicated vswitch for each VM, am I right?
Since I believe a virtual NIC has 10 Gb/s of potential throughput, I could use a single vswitch for all VMs and a single vNIC on the pfSense VM. But in that case, can I prevent a VM from talking to the other VMs on the vswitch without first passing through pfSense? (i.e. prevent direct C-to-D traffic and force C -> B -> D.)
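For reference, here is roughly what I imagine the single-vswitch variant could look like if each VM port gets its own VLAN tag so traffic has to hairpin through pfSense. All the bridge and port names below are made-up placeholders, not from an actual config:

```shell
# Sketch only -- bridge and port names are hypothetical.
# One OVS bridge; each VM's tap port is an access port on its own VLAN,
# so the VMs cannot reach each other directly at layer 2.
ovs-vsctl add-br vmbr1

# The pfSense vNIC is a trunk carrying all per-VM VLANs;
# pfSense would then filter/route between VLAN sub-interfaces.
ovs-vsctl add-port vmbr1 tap_pfsense trunks=101,102

# VM C and VM D each isolated on their own access VLAN.
ovs-vsctl add-port vmbr1 tap_vmC tag=101
ovs-vsctl add-port vmbr1 tap_vmD tag=102
```

I realize this gives each VM its own VLAN, so keeping everything in one subnet would require pfSense to bridge those VLANs somehow, which is exactly the part I'm unsure about.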
Keeping in mind that the VMs (C, D, ...) and the clients (F, G, ...) should be on the same subnet, I see no solution with a single vswitch... Are there any downsides to having many vswitches? (I fear all Open vSwitch configuration is file-based rather than GUI-based... I can handle it, but I'd prefer a nice GUI, as always.)
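For what it's worth, my understanding is that Proxmox can create OVS bridges from its own web GUI once the openvswitch-switch package is installed, writing something like the following into /etc/network/interfaces (interface names here are just placeholders):

```
# Sketch of what Proxmox generates for an OVS bridge
# (bridge and NIC names are hypothetical).
auto eno2
iface eno2 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr1

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports eno2
```

So maybe the "no GUI" fear is unfounded, but I'd appreciate confirmation from someone running this.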
I'm also a bit wary of running non-critical VMs on the hardware that will host my main router and switch, mainly because I fear I will have to reboot the Proxmox host to deal with an issue or a config change on some low-importance VM, taking the whole network down when I do.
Any reassurance about the usual reasons to reboot a Proxmox host is welcome.
Any other advice on this setup is also welcome.
Just in case: the host will have an 8th-gen Core i5 CPU with 16 GB of RAM and 6 Intel 1 Gb/s physical NICs, so I expect good performance and hopefully over 1 Gb/s of routing throughput. I actually think it is overkill for just a router, which is why I'm considering hosting other services on this host.
Also, I'm thinking of not dedicating any physical NIC to Proxmox and instead passing all 6 physical NICs to pfSense, keeping only a virtual NIC between pfSense and Proxmox for Proxmox management. Any advice on this? I fear that if pfSense gets stuck in a boot loop I would lose all management of Proxmox short of plugging a screen and a USB keyboard into the host. And even then, I would only get console access and no nice HTTP GUI for Proxmox, right? Is there any way to auto-detect that a VM like pfSense is acting up and automatically pull a physical NIC back to Proxmox to keep management access?
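To partially answer my own question, I imagine a crude watchdog on the Proxmox host could ping the pfSense side of the management vNIC and, after repeated failures, bring up a fallback management IP directly on one of the physical NICs. A rough sketch of the idea (the IP addresses and NIC name are made-up placeholders, and this is untested):

```shell
#!/bin/sh
# Hypothetical watchdog sketch -- all names and addresses are placeholders.
PFSENSE_IP="10.0.0.1"        # pfSense side of the management vNIC
FALLBACK_NIC="eno1"          # physical NIC to reclaim for management
FALLBACK_IP="192.168.99.2/24"

fails=0
while true; do
    if ping -c1 -W2 "$PFSENSE_IP" >/dev/null 2>&1; then
        fails=0
    else
        fails=$((fails + 1))
    fi
    # After 5 consecutive failures, give the host a direct management IP
    # so the Proxmox web GUI stays reachable on the fallback NIC.
    if [ "$fails" -ge 5 ]; then
        ip link set "$FALLBACK_NIC" up
        ip addr add "$FALLBACK_IP" dev "$FALLBACK_NIC" 2>/dev/null
    fi
    sleep 10
done
```

I'm not sure whether this is the sane way to do it, or whether something built into Proxmox (HA watchdog, hookscripts?) already covers this case, so any pointers are welcome.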
Thank you in advance for your kind help!
Regards,
Toxic.