New Proxmox Setup with OPNsense: Advice Needed

chinyongcy

New Member
Apr 10, 2023
Not sure if this is the right place to post this... moderators, do let me know if this is not appropriate.


I have ordered a system with an Intel N100 CPU, 16GB DDR5 RAM, a 118GB Intel Optane P1600X SSD, and 4 Intel i226 2.5GbE ports.

I currently also have a 3-node cluster, with only 2 nodes running at a time: one runs my "production" stuff (some media servers and sites), and the second runs my development stuff and monitoring stack (just started on this, InfluxDB 2). The third node I only spin up when I need more resources or VMs for some testing. My home network is linked up with a 2.5Gbps switch; my internet plan is 1Gbps symmetrical, with a plan to upgrade to 2.5Gbps by the end of this year.

The plan/thought process:
  1. I should not add my new N100 Proxmox host to the cluster, since the cluster would then have an even number of nodes, which I understand is not good practice.

  2. Though I have only a single drive, should I still go for ZFS, or is LVM-Thin fine as well? I understand ZFS still has some benefits even on one disk. Is it worth it? (Rough sketch after this list.)

  3. As for the OPNsense VM, the only thing I'm having trouble wrapping my head around is how I should plan the NIC ports. Should I:
    • Pass through 1 port to OPNsense for WAN, and use the other 3 ports on a Proxmox bridge (vmbr0), or
    • Pass through 2 ports (1 for WAN and 1 for LAN), since I will be connecting to my switch as well. I'm considering passthrough because it should theoretically give better performance. But then I would need to attach another vNIC to OPNsense and bridge one or two of the remaining ports to connect to my switch. Would this be more recommended than the option above? (Bridge config sketch after this list.)
  4. Since the N100 is a fairly strong CPU, I am thinking of setting up an IDS/IPS/NGFW (tbh I am not 100% sure those are the same thing; that's partly why I got the N100, to learn more about firewalls and beef up security). Any recommendations on which one I should go for? Suricata vs Snort vs Zenarmor? Or what are you using now? And with all this running, is it strong enough to handle a 2.5Gbps network?
  5. Also, since this will be my only system with an Optane drive (meaning better write endurance), I am thinking of moving my logging-stack VM onto it.
  6. I will also move 1 of my 2 instances of Pi-hole to this box, dedicating it to all the networking stuff.
  7. Now I am at the point where I think 118GB might not be enough for all of this. I only have one spare SATA port available. If I were to utilise it, should I install Proxmox on a SATA SSD and keep the VMs on the NVMe Optane drive? I am thinking InfluxDB would benefit from the Optane. (Storage sketch below.)
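From what I've read so far, ZFS on a single disk still buys you checksums, snapshots and compression (no self-healing without redundancy, though). A rough sketch of converting a spare disk later; the device name /dev/nvme0n1 and the pool/storage names are my assumptions (check with lsblk):

  # single-disk pool: no redundancy, but checksums, snapshots
  # and compression still work
  zpool create -o ashift=12 tank /dev/nvme0n1
  zfs set compression=lz4 tank
  # register the pool as VM storage in Proxmox
  pvesm add zfspool tank-vm --pool tank --content images,rootdir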
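If passthrough doesn't pan out, the all-bridged layout from the first bullet looks like it's just two bridges in /etc/network/interfaces on the Proxmox host. A rough sketch; the port names (enp1s0…enp4s0), address and gateway are my placeholders:

  auto vmbr0
  iface vmbr0 inet static
      address 192.168.1.2/24
      gateway 192.168.1.1
      bridge-ports enp2s0 enp3s0 enp4s0
      bridge-stp off
      bridge-fd 0

  # WAN bridge: no host IP, only the OPNsense WAN vNIC attaches here
  auto vmbr1
  iface vmbr1 inet manual
      bridge-ports enp1s0
      bridge-stp off
      bridge-fd 0

Leaving vmbr1 without a host address at least keeps the host from holding an IP on the WAN segment.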
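And for the storage split, moving a guest disk onto the Optane later seems to be a one-liner; the VM ID 100, disk slot scsi0 and storage name optane below are placeholders:

  # move VM 100's disk to the Optane-backed storage,
  # removing the old copy afterwards
  qm move-disk 100 scsi0 optane --delete 1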
I'm getting a bit indecisive, as there are many ways of doing things. I just wanted to hear the community's thoughts and best practices.
Thank you very much.
 
Regarding the NIC setup of your OPNsense VM… from a security standpoint, it's always best to have dedicated (passthrough) ports for your guest. I would also never even think about having my WAN patched directly to a bridge, because that way all the WAN traffic hits your host directly. I don't know if you can actually pass through only one of the 4 ports of a PCI device, but I don't think so, so going with a separate NIC altogether is the best choice here.

I doubt that passthrough NICs will boost your WAN performance, but depending on your traffic you might benefit from some offloading when using passthrough NICs. To me, though, this is primarily a security concern.

An odd/even number of cluster nodes only matters if you're running a stretched cluster spanning two or more physical locations. In that case, a disruption of communication between the "parts" of the cluster comes into play, and usually the biggest cohort survives; there is no biggest cohort if you're running an equal number of hosts at each end. In such a case, you can bring in a simple voting QDevice, which will ensure the survival of the side the QDevice is connected to. However, as long as all your hosts are on the same network and connect to the same switch, you should be fine with an even number of nodes.
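For completeness: adding such a QDevice is quick. A sketch, assuming a small always-on box (a Pi, a NAS, whatever) reachable at e.g. 192.168.1.50:

  # on every cluster node:
  apt install corosync-qdevice
  # on the box providing the extra vote:
  apt install corosync-qnetd
  # then, from any one cluster node:
  pvecm qdevice setup 192.168.1.50
  # verify the vote count:
  pvecm status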
 
Now that you mention it, I have a feeling I might not be able to pass each individual port through to the guest; the 4 ports might be seen as a single PCIe device. In that case it seems I'd only be able to do bridging on the Proxmox side and then create vNICs on OPNsense for both WAN and LAN. I'll run the quick check sketched below and update here once I get the device.
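From what I've read, each i226 port should enumerate as its own PCIe function, and what matters for passthrough is whether those functions land in separate IOMMU groups:

  # list the Ethernet functions
  lspci | grep -i ethernet
  # show the IOMMU grouping (needs intel_iommu=on on the kernel cmdline)
  for g in /sys/kernel/iommu_groups/*; do
      echo "IOMMU group $(basename "$g"): $(ls "$g/devices")"
  done

If each port sits in its own group, passing only the WAN port through while bridging the rest should be possible; if they share a group, it's all or nothing.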

In that case, I will leave the performance aside for now. Trying to learn the security part here: what could happen, security-wise, if the WAN port is bridged?

Oh, to note: as of now I am not using any HA. The only thing that I think will bother me if I shut down my 3rd node is that I'm afraid I won't be able to start/stop any VMs due to fencing. Same as before, when I only had 2 nodes: if I shut one node down, I could not do much on the other one :( Yup, all nodes are on the same network and share the same switch.
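For reference, from what I've read the start/stop block is about quorum (votes), and pvecm has an escape hatch for a node that lost its partner; apparently the override resets when corosync restarts, so emergencies only:

  # show quorum state and vote counts
  pvecm status
  # emergency override on the surviving node:
  # tell corosync that 1 vote is enough
  pvecm expected 1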
 
Well, I am no hacker, but I'd guess that e.g. broadcasts would make it to and from the bridge, which poses an information leak to the outside world. The bridge will expose that kind of traffic to the internet, which is never a good thing. The issue is that any traffic from the internet hits your VM host directly without being screened or pruned. Malicious traffic would be able to hit the host before being blocked by the firewall, which is also connected to the bridge. And a bit of ARP poisoning can go a long way. If you can, get a dedicated dual-port NIC and configure IOMMU so you can pass that NIC through to your OPNsense VM.

Fencing is used to decide what to do with a host, or a group of hosts, that has lost contact with the other cluster members. The general consensus is to have 3 nodes, so that if one node loses its connection, the remaining two form the greater cohort and survive, while the single node gets fenced. The issue arises if you have a situation with an equal number of hosts and a means of cutting those two halves off from one another, say a dedicated link between two co-locations. Since the number of surviving hosts in each cohort is equal, all nodes would get fenced.
Another example: if your switch died and all three of your nodes disconnected from each other, you'd get 3 cohorts of 1 node each, which would also cause all your nodes to fence (themselves) in the hope of re-acquiring the connection on reboot.
 
That was my initial plan, to use passthrough. But I am not sure whether the 4 ports on the system I'm getting sit on a single PCIe device or not. I will have to wait till it arrives and try it out.

Understood. I think I will not add my "firewall" Proxmox box to the cluster then. It should reduce complexity, and I'm thinking it might be good to separate it out as well.
 
