Just 1 VM per PVE host, and build up the HA between them (any network will do). The only problem with OPNsense HA: you need 3 uplink IP addresses (one per firewall plus the shared CARP VIP), otherwise it will not work. With a single provider IP, this does not work.
@spetrillo
My setup is super simple, see attached HW I am using with Proxmox/OPNsense. I just map my 3 bridges to the VM: vmbr0 = LAN, vmbr1 = internet and vmbr2 = internet2 (I have failover).
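For reference, such a three-bridge layout in /etc/network/interfaces looks roughly like this (a sketch; the physical NIC names eno1/eno2/eno3 and the LAN address are just examples, yours will differ):

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.2/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0

Note that vmbr1/vmbr2 get no IP on the host itself, only OPNsense talks to the uplinks.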
Is your i350 card the only network interface in the node?
I personally use the standard bridge option with OPNsense (so no hardware pass-through). This makes it easier to move the VM to another node (a bridge is generic), and the performance impact of bridge vs pass-through is not that big.
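Attaching the bridges to the VM is then a one-liner per NIC (a sketch; VM ID 100 is just an example):

qm set 100 --net0 virtio,bridge=vmbr0
qm set 100 --net1 virtio,bridge=vmbr1
qm set 100 --net2 virtio,bridge=vmbr2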
Check out the following information:
https://forum.proxmox.com/threads/intel-nuc-13-pro-thunderbolt-ring-network-ceph-cluster.131107/
It is using Thunderbolt networking, because you can reach 26+ Gbit/sec - but the networking concepts do not differ between Thunderbolt and RJ45/SFP (it is...
My setup was working fine until recently (not sure when it broke): I could easily access everything, the host and all VMs. Currently I can still access the VMs, but the Proxmox host cannot be accessed in any way from the local network. OPNsense is running as a VM, so it can access the internet and...
I found a workaround: edit the /etc/default/grub file and append the following to GRUB_CMDLINE_LINUX_DEFAULT (then run update-grub):
pcie_port_pm=off pcie_aspm.policy=performance
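The resulting line ends up something like this (a sketch; "quiet" is just the Proxmox default, keep whatever options you already have), followed by update-grub and a reboot:

GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_port_pm=off pcie_aspm.policy=performance"
update-grub
reboot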
I have a NUC13 with an i226v 2.5Gbit/sec NIC, running at 1Gbit/sec. Proxmox 8 is installed, with the latest kernel. My internet is 1Gbit/sec down/up. All testing is done directly on the Proxmox host itself, no VM. On my other NUC8 the speed is always 1Gbit/sec down/up, whatever I do.
When I use scp...
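To rule out link negotiation, it helps to check what the NIC actually negotiated and to test LAN throughput separately from the internet uplink (a sketch; the interface name enp86s0 and the IP are just examples, check "ip link" for yours):

ethtool enp86s0 | grep -E 'Speed|Duplex'
iperf3 -s                 # on the other node
iperf3 -c 192.168.1.2     # on the NUC13, against that node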
No, not related. I tested both via USB passthrough and via "usbip". With my Z-Wave I had too many drops/retries with passthrough and with "usbip" zero issues. That is why I moved fully to "usbip" (it is also more flexible).
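The basic "usbip" flow, for anybody who wants to try it (a sketch; the bus ID 1-1.2 and the host IP are just examples, use "usbip list -l" to find yours):

On the host with the device:
modprobe usbip_host
usbipd -D                  # start the usbip daemon
usbip list -l              # find the bus ID of the stick
usbip bind -b 1-1.2

Inside the VM:
modprobe vhci-hcd
usbip attach -r 192.168.1.10 -b 1-1.2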
I cancelled my original order, because the delivery date kept shifting. I ordered on Monday from Mouser (EU), it got shipped out the same Monday and it arrived today :-) I will install it tonight in my second NUC13.
BTW - I did a bit more testing, but NUC8 TB3 <-> NUC13 TB4 is not really super reliable, so...
The only frustrating part is the automatic bind on start-up, but it can be hacked into a service. I made another bash script which handles it, including status, logging, etc.
I uploaded it to my gist:
https://gist.github.com/ualex73/e6d6088120840a10e126d62fe4061079
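If you do not need the full script, the bind-on-boot part can also be a bare-bones systemd unit, roughly like this (a sketch; the bus ID 1-1.2 is an example and the binary paths may differ per distro, check with "which usbip"):

[Unit]
Description=Bind USB device for usbip
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe usbip_host
ExecStart=/usr/sbin/usbipd -D
ExecStart=/usr/sbin/usbip bind -b 1-1.2

[Install]
WantedBy=multi-user.target

Save it as /etc/systemd/system/usbip-bind.service and enable it with "systemctl enable usbip-bind".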
I tried a build with Ceph, but did not get CephFS to work directly (gave up quickly, I know) ... but your steps are super helpful, so I will give it another try soon.
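From what I understand, once the Ceph cluster itself is healthy, CephFS on Proxmox should only need a metadata server plus the filesystem itself (a sketch, not tested on my side):

pveceph mds create                              # on each node that should run an MDS
pveceph fs create --name cephfs --add-storage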
Correct, I am from the Netherlands. Good tip about Mouser. If my current supplier cannot deliver it within a week or so, I will cancel and order there.
I am using "usbip" to connect the USB devices inside the Proxmox VMs, because USB device via the hardware is slow and not stable. Technically I can...
Yes, I found the 4000 MTU somewhere in the TB source code, hence the test around it.
I am almost happy with my setup, but I think I will not build the Ceph storage on my end. I normally do not keep 3 nodes up-and-running, so there is no big benefit for me (my Home Assistant node is connected to...
@scyto I tested different MTU sizes on the Thunderbolt link, but it is better to configure 4000.
MTU 9000 gives:
Accepted connection from 10.0.0.2, port 32842
[ 5] local 10.0.0.3 port 5201 connected to 10.0.0.2 port 32852
[ ID] Interval Transfer Bitrate
[ 5] 0.00-1.00 sec 1.30 GBytes...
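The 4000 can be set persistently on the Thunderbolt interface (a sketch; the interface name en05 comes from the linked thread, yours may differ, and 10.0.0.3 matches my test above):

ip link set en05 mtu 4000     # one-off test

Or in /etc/network/interfaces:

auto en05
iface en05 inet static
        address 10.0.0.3/24
        mtu 4000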
I am also running "ANRPL357.0027.2023.0607.1754"; I updated all firmware before I began with the rest. What type of issues did you notice with the ethernet board?