If the OP wants cross-node VM communication on a separate private network range, then SDN -> Zones -> VXLAN will work, but there is (currently, apparently) no DHCP + IPAM + DNS facility available.
I just set up a Simple network and have DHCP and DNS working via dnsmasq, PVE IPAM and PowerDNS for...
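For anyone wanting to reproduce this, here is a sketch of the SDN config files involved, based on my notes. The zone/server names, API key and domain are placeholders, and the exact field names may vary with your PVE version, so treat this as a starting point rather than a drop-in config:

```
# /etc/pve/sdn/dns.cfg -- PowerDNS API endpoint (name, URL and key are placeholders)
powerdns: pdns
        url http://127.0.0.1:8081/api/v1/servers/localhost
        key REDACTED
        ttl 300

# /etc/pve/sdn/zones.cfg -- Simple zone with automatic DHCP, PVE IPAM and DNS
simple: lan0
        dhcp dnsmasq
        ipam pve
        dns pdns
        dnszone internal.example

# /etc/pve/sdn/subnets.cfg -- DHCP range for the vnet in that zone
subnet: lan0-10.1.1.0-24
        vnet vnet0
        gateway 10.1.1.1
        dhcp-range start-address=10.1.1.100,end-address=10.1.1.200
```

Remember to apply the SDN changes (Datacenter -> SDN -> Apply, or `pvesh set /cluster/sdn`) before testing.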
I have this mostly working, so that when I create a new VM/CT it gets assigned a new IP via DHCP and an A record for the hostname (+ domain), and the assigned IP gets inserted into the PowerDNS SQLite database. Great, that was a hard-won battle with the help of Claude.ai. However, the PTR record can't...
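For anyone poking at the PTR side: the reverse zone and owner name can be derived from the assigned IP with Python's stdlib, which is handy when scripting record inserts (the IP below is just an example from my 10.1.1.0/24 range):

```python
import ipaddress

# Derive the PTR owner name for a DHCP-assigned address.
# For 10.1.1.50 this yields "50.1.1.10.in-addr.arpa".
ip = ipaddress.ip_address("10.1.1.50")
print(ip.reverse_pointer)
```

With that name in hand, `pdnsutil add-record <reverse-zone> <label> PTR <fqdn>` can create the record, assuming the reverse zone (e.g. `1.1.10.in-addr.arpa`) already exists in PowerDNS.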
Thanks to Chris and VictorSTS for the explanation of how GC works. I just did another manual GC run after a remote sync plus a prune. This time was indeed 7h 23m, with some index files taking up to an hour, while most were a few seconds. This run got to 97% of phase1 in under 5 mins, then really...
This particular example was not too bad, only ~3 hours for the entire GC run, whereas previous runs would take 5 to 7 hours and always with many hours spent on the last dozen index files. I just did a GC on a similar sized Datastore on a 4TB NVMe in an MS-01, and it took less than 5 minutes! I...
I'm running a Proxmox Backup Server on my Terramaster F2-423 with 32GB of RAM. I've got two NVMe drives in the system - one for the Proxmox VE host and the other for Ceph storage. My main storage consists of two 8TB Seagate CMR drives in a ZFS mirror configuration. I've passed these through to a...
Congratulations, well done. So after a month, is your VM still working okay? Any gotchas or points you would add to the setup procedure?
FWIW, so far, I've made a start on translating the gangqizai/igd repository, so I can more easily follow along. I intend to update the README and repo as I...
I run Proxmox Backup Server inside PVE LXC containers on all my PVE cluster nodes, and I'm wondering if it would be possible to do the same for Proxmox VE inside an Incus LXC container?
I'm about to test running Proxmox in an Incus VM on an Arch Linux variant, but it occurred to me that if I need to...
Mine seems fine with no crashes or lockups. Up 10 days atm, and I just checked dmesg output with nothing dramatic there either. Up-to-date packages with pve-firmware 3.11 and proxmox-kernel-6.8 6.8.4-3. It's under reasonable load. I have 1TB in the 3x2 slot for the PVE host, a 2TB in the 3x4 slot for...
Apologies for the bump, but there should be hundreds more MS-01s in the hands of Proxmox users by now, and I am still hoping to hear of any success by anyone trying FULL iGPU passthrough with any MS-01?
_gabriel, my i915 was blacklisted on the host. FWIW.
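To clarify what I mean by blacklisted, something along these lines on the PVE host (filename is my own choice; any `.conf` under `/etc/modprobe.d/` works):

```
# /etc/modprobe.d/blacklist-i915.conf -- keep the host from binding the iGPU
blacklist i915
```

Then rebuild the initramfs with `update-initramfs -u` and reboot so the host never claims the device before passthrough.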
Apologies for necrobumping, but I noticed the magic words "bgp-evpn". I have an EVPN-VXLAN example working (thanks to your help), but I am still struggling to get any kind of BGP layer working to provide multiple exit nodes in said example. Would you happen to know of any step by step...
I can't mark this as a solution because I have no idea why the installation was copying the "backups" folder when I specifically excluded "Backup" from the CT mountpoint. However, I wiped the disk and started again, and now the "backups" folder is not included in the actual backup of this...
I have two similar PBS LXC containers on different PVE hosts, and they both use a mountpoint from a local zpool on a separate disk(s). One of them does the right thing and avoids including the configured backup directory for itself when backed up to its own PBS instance. They both have the same...
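For anyone hitting the same thing, the relevant knob is the `backup` flag on the container's mountpoint. A quick check and an explicit exclusion look something like this (VMID, storage and paths are placeholders for my setup):

```shell
# Show the container's mountpoints and their backup flags (101 is a placeholder VMID)
pct config 101 | grep '^mp'

# Explicitly exclude the PBS datastore mountpoint from vzdump backups
pct set 101 -mp0 local-zfs:subvol-101-disk-1,mp=/backups,backup=0
```

With `backup=0` set, vzdump should skip that mountpoint entirely when backing up the CT.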
Glad to hear that how-I-did-it worked for someone else. I'm still trying to figure out BGP exit nodes and DHCP assignments and haven't even tried multiple vnets yet, but if I come across some way to confine traffic in this EVPN-VXLAN context I'll report back.
Sorry, I'm not sure about that example, as I have not tried VXLAN-only. Keep in mind that VXLAN will not let you "escape" the defined /24 network, i.e. you can't ping the parent host or the outside world. For that, you need to use EVPN + VXLAN as per my example. I would be interested to know how you...
If you want to have a "private" /24 network working between nodes, then you may need to consider using the EVPN (+ VXLAN) option. If so, then have a look at my example here...
https://forum.proxmox.com/threads/inter-node-sdn-networking-using-evpn-vxlan.146266/post-660975
So a small improvement seems to be adding "nomodeset i915.force_probe=4680" to the guest CachyOS systemd-boot system. Now the above i915 kernel crash has disappeared!
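Concretely, that means appending the parameters to the `options` line of the guest's boot entry under `/boot/loader/entries/`. The filename and kernel/initrd names below are from my install and will differ on yours; the root= value is whatever your entry already has:

```
# /boot/loader/entries/cachyos.conf (entry filename varies per install)
title   CachyOS
linux   /vmlinuz-linux-cachyos
initrd  /initramfs-linux-cachyos.img
options root=... rw nomodeset i915.force_probe=4680
```

Reboot the guest after editing; `cat /proc/cmdline` inside the VM confirms the parameters took effect.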
My Minisforum MS-01 finally arrived after 4 months, so I've spent nearly 24 hours trying to see if I can get iGPU passthrough to work, no luck so far. I'll detail my settings and results so far and see if anyone can comment. Hopefully this thread will end up with a working example that can apply...
Thanks to spirit, I managed to get this EVPN-VXLAN system to work so here is a how-I-did-it.
Assuming a router with a LAN IP of 192.168.1.1 and 3x Proxmox VE host nodes as pve1 (192.168.1.21), pve2 (192.168.1.22) and pve3 (192.168.1.23) including a number of VM/CTs in an internal 10.1.1.0/24...
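The core of that setup can be sketched with the SDN config files below. The ASN, VXLAN tags and names are my own choices for this example, and field names may differ slightly by PVE version, so verify against your Datacenter -> SDN panel before applying:

```
# /etc/pve/sdn/controllers.cfg -- EVPN controller peering the three nodes
evpn: evpnctl
        asn 65000
        peers 192.168.1.21,192.168.1.22,192.168.1.23

# /etc/pve/sdn/zones.cfg -- EVPN zone with pve1 as the exit node
evpn: evzone
        controller evpnctl
        vrf-vxlan 10000
        exitnodes pve1
        mtu 1450

# /etc/pve/sdn/vnets.cfg and /etc/pve/sdn/subnets.cfg -- the internal network
vnet: vnet1
        zone evzone
        tag 11000

subnet: evzone-10.1.1.0-24
        vnet vnet1
        gateway 10.1.1.1
        snat 1
```

VM/CTs attached to vnet1 then get addresses in 10.1.1.0/24 and reach the 192.168.1.0/24 LAN via the exit node.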
Thanks yet again. I removed my initial tests and started from scratch again following your guide, with a few other Google hints, and I have something that works as I expect, BUT only for ICMP. I can ping east-west and north-south (well, north at least) successfully, but dig (UDP) and curl...