root@proxmox1:/etc/frr# vtysh -c 'show bgp neighbor 192.168.8.14'
BGP neighbor is 192.168.8.14, remote AS 65000, local AS 65000, internal link
Local Role: undefined
Remote Role: undefined
Member of peer-group VTEP for session parameters...
Sorry, I was too fast sending that post.
Additionally, could you post your SDN config:
cat /etc/pve/sdn/controllers.cfg
cat /etc/pve/sdn/zones.cfg
Plus the generated FRR config on proxmox1 / proxmox2:
cat /etc/frr/frr.conf
That usually indicates that there is a mismatch in the capabilities/... advertised in the BGP OPEN message.
Could you post the output of:
vtysh -c 'show bgp neighbor 192.168.8.14'
On proxmox1.
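To narrow down a capability mismatch, it can help to diff the capability sets both sides report. A minimal sketch, assuming the JSON output of `vtysh -c 'show bgp neighbors <ip> json'` contains a `neighborCapabilities` dict keyed by the peer IP (the exact key names may vary between FRR versions):

```python
def diff_capabilities(local_json, remote_json, peer_ip):
    """Return capabilities seen only on one side of a BGP session.

    Each argument is the parsed JSON from
    `vtysh -c 'show bgp neighbors <ip> json'` on one node.
    """
    a = local_json.get(peer_ip, {}).get("neighborCapabilities", {})
    b = remote_json.get(peer_ip, {}).get("neighborCapabilities", {})
    only_local = sorted(set(a) - set(b))
    only_remote = sorted(set(b) - set(a))
    return only_local, only_remote

# Toy data standing in for real vtysh JSON output:
n1 = {"192.168.8.14": {"neighborCapabilities": {
    "4byteAs": "advertisedAndReceived",
    "extendedNexthop": "advertised"}}}
n2 = {"192.168.8.14": {"neighborCapabilities": {
    "4byteAs": "advertisedAndReceived"}}}
print(diff_capabilities(n1, n2, "192.168.8.14"))  # (['extendedNexthop'], [])
```

In practice you would feed it `json.loads()` of the vtysh output from each node; anything listed on only one side is a candidate for the mismatch.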
Yep, I think you identified the issue. The 2 x HDs that wouldn't spin down are an unmounted (backup) ZFS mirror that is put to sleep with hdparm -Y. With the new kernel using ZFS 2.3.5, they would wake up every ~10 mins. See...
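For anyone debugging the same thing: you can verify the drives' power state before and after forcing sleep. Device names below are examples, not from the original post:

```shell
# Report current power state ("standby" vs. "active/idle"):
hdparm -C /dev/sdb /dev/sdc

# Force the drives to sleep immediately:
hdparm -Y /dev/sdb /dev/sdc
```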
I have a cluster of 3 nodes but only 2 participate in a simple EVPN. I want at least one VM to ping another VM on the other proxmox node. I don't even use SNAT or anything additional.
I am on Proxmox 8.4 and the FRR package in its latest...
If you need to use the same subnet multiple times, you need to use VRFs to separate them on the PVE host. This functionality is currently not implemented for Simple Zones. When using NAT this way, you'd also need a way of discerning return...
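As a rough illustration of what the VRF separation would look like done by hand (interface and table names here are hypothetical, not part of Proxmox SDN), each tenant's bridge is enslaved to its own VRF so the same subnet can exist twice on one host:

```shell
# VRF for tenant A (table number is arbitrary but must be unique):
ip link add vrf-a type vrf table 100
ip link set vrf-a up
ip link set vnet-a master vrf-a   # vnet-a: tenant A's SDN bridge (assumed name)

# VRF for tenant B, holding an identical subnet without conflict:
ip link add vrf-b type vrf table 200
ip link set vrf-b up
ip link set vnet-b master vrf-b
```

Routing lookups for each bridge then happen in its own table, which is exactly the isolation Simple Zones currently don't provide.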
Hi.
Would you mind sharing how this issue was resolved ?
Thanks.
EDIT: Solved it too, by checking: https://forum.proxmox.com/threads/rbd-error-rbd-listing-images-failed-2-no-such-file-or-directory-500.56577/
rbd ls -l <your ceph pool>
showed...
There is a patch on the devel mailing list for using the proxmox-auto-install-assistant to get an initrd, vmlinuz and an example config for iPXE booting, besides the ISO: https://lore.proxmox.com/all/20260204121025.630269-1-c.heiss@proxmox.com/...
Hello,
I'm trying to deploy multiple isolated instances of the same VM scenario in Proxmox using SDN zones, but I'm running into a problem when enabling SNAT for internet access.
What works:
Multiple SDN Simple zones, each containing identical...
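For context on what SNAT for a Simple zone boils down to on the host, here is a hand-written equivalent (subnet and interface names are examples, not from the original setup):

```shell
# Masquerade one zone's subnet out of the host uplink:
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
```

The catch with multiple identical subnets is visible right in this rule: if 10.10.10.0/24 exists in several zones, plain SNAT on the host has no way to tell which zone a return packet belongs to, which is why the VRF separation mentioned above is needed.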
not super sure about the 'no installation candidate' issue, what does
apt-cache policy nvidia-driver
show?
here it's like this:
nvidia-driver:
  Installed: (none)
  Candidate: 550.163.01-2
  Version table:
     550.163.01-2 500
        500...
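If `apt-cache policy nvidia-driver` instead shows `Candidate: (none)`, the usual cause is that the non-free components are missing from the APT sources. A sketch of the fix, assuming a Debian Bookworm base (adjust the mirror and codename to your system):

```shell
# Enable the contrib/non-free components (example line; verify against
# your existing /etc/apt/sources.list before appending):
echo 'deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware' \
  >> /etc/apt/sources.list

apt update
apt-cache policy nvidia-driver   # should now show a candidate version
```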
I can only agree with that 80%. Where exactly does it say that you must not use hardware RAID with ZFS? It is recommended not to do so, because people who ask about it without much background often have no understanding of ZFS, and frequently none of their RAID...
I am using ZFS to run the node itself, on two NVMe disks in a RAID1 (mirror) configuration. The Universe storage resides on three SSDs, using LVM-thin for VMs and LXCs, and LVM for data storage via OMV.
The issue at hand is that the VG pve is...
So in fact, no md RAID is showing on the PVE host (cloud).
(Although we don't know if this was freshly booted without the omaya-cloud VM100 being started).
Things that I see in your output that don't seem right:
1. In the lsblk output, no...
The patch is already applied, so it will be included in the next version bump for qemu-server.
When you manually apply patches, you have to reload/restart pveproxy and pvedaemon so that they load the Perl libraries again.
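For reference, restarting both services after patching looks like this:

```shell
# Reload the patched Perl modules by restarting the PVE API services:
systemctl restart pveproxy pvedaemon
```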