After using Proxmox for ~6 months in my homelab, I am quite tempted to install my new OPNsense on top of Proxmox. The $1M question: is this a good idea? (I don't expect a yes/no answer, as that doesn't exist.)
Current setup:
- 3 x Odroid H3, each with 32 GB RAM and a 2 TB NVMe drive.
- Proxmox 8 installed on a small 32 GB partition.
- CEPH on the remaining almost 2 TB of the NVMe drives in the default 3/2 replicated mode (a sketch of that pool setup follows this list).
- CEPH uses one of the 2.5 GbE NICs via a dedicated switch.
- The other 2.5 GbE NIC carries the management interface and all regular traffic of the systems.
- A Synology NAS houses the backups and all installation files.
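For reference, a minimal sketch of how such a 3/2 replicated pool is typically created and checked on Proxmox; the pool name vm-pool is just a placeholder, not my actual pool:

```
# Create a replicated pool with the default 3/2 settings
# (3 copies kept, writes still allowed while 2 copies are available).
pveceph pool create vm-pool --size 3 --min_size 2

# Verify the replication settings afterwards.
ceph osd pool get vm-pool size
ceph osd pool get vm-pool min_size
```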
Now a new Topton Core i5-1235U system with 6 x 2.5 GbE NICs will be used to host OPNsense. I will install 32 GB RAM and the same 2 TB NVMe disk.
It would be rather easy to integrate this into the Proxmox cluster, expanding the CEPH cluster to a 4th drive. I hope the NICs can be passed through directly, but even virtual NICs will do the job: I have a test OPNsense running now with virtual NICs on the H3's, and it works really well (that NIC is even shared with the management interface, which won't be needed on the Topton as it has 6 NICs).
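A minimal sketch of the two NIC options as I understand them, assuming a placeholder VM ID of 100 and a placeholder PCI address:

```
# Option 1: pass a physical NIC straight through to the OPNsense VM
# (requires IOMMU/VT-d; the PCI address 0000:02:00.0 is a placeholder).
qm set 100 --hostpci0 0000:02:00.0

# Option 2: a virtual NIC on a Linux bridge, like my test VM on the H3's.
qm set 100 --net0 virtio,bridge=vmbr0
```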
The big disadvantage is that if my cluster of H3's goes down (for instance a power supply failure), the router goes down as well because quorum is lost. A single H3 node failure is no problem. A dual H3 node failure can be absorbed with a QDevice (running on my Synology NAS, for instance, or on one of the currently unused Raspberry Pis). To go even further, I could increase the vote count of the router node so that the H3 cluster could fail completely (of course, if other nodes fail as well, even this doesn't work).
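A minimal sketch of those two measures, with 192.168.1.10 as a placeholder address for the QDevice host and "topton" as a placeholder node name:

```
# Add a QDevice as an external tie-breaker vote
# (corosync-qnetd must be installed and running on the target host).
pvecm qdevice setup 192.168.1.10

# Check the resulting vote and quorum situation.
pvecm status

# Giving the router node extra weight would mean editing
# /etc/pve/corosync.conf and raising its vote count, roughly:
#   node {
#     name: topton
#     quorum_votes: 2
#     ...
#   }
```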
If I am not mistaken, CEPH will also stop working with only a single OSD left in the cluster. This could be overcome by installing the NVMe adapter and a second NVMe drive in the Topton (disadvantage: I lose the 3x4 PCIe connection and am down to 1x4; alternatively, a SATA drive could do the trick as well). That would give CEPH 2 OSDs but still only a single monitor. If I understood things correctly, this means that CEPH will go down, and there is no "QDevice" for CEPH as far as I know.
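For what it's worth, these are the standard Ceph commands I would use to watch that behaviour (nothing here is specific to my setup):

```
# Overall cluster health, including how many OSDs and monitors are up.
ceph -s

# Monitor quorum: with only 1 of 3+ monitors left, quorum is lost
# and the cluster stops serving I/O regardless of how many OSDs remain.
ceph mon stat

# OSD view, to see which OSDs are still up and in.
ceph osd tree
```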
No internet means a bad WAF. All other services are less critical in that sense.
Which way would you go (and why):
- Integrate it into Proxmox, thus expanding the cluster and adding storage space, reliability and computing power?
- Keep OPNsense standalone and have only one point of failure for the incoming internet?