We managed to fix this by migrating our custom local config to the built-in OSPF feature, leaving /etc/frr/frr.conf.local used only for ospf6d (and lo):
frr version 10.4.1
frr defaults datacenter
hostname ffac-epyc-03
log...
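For reference, a minimal ospf6d section in /etc/frr/frr.conf.local could look like the sketch below; the interface names, area and router ID here are placeholders, not our actual values:

```
! ospf6d sketch - ens1f0/ens1f1 and the router ID are assumptions
interface ens1f0
 ipv6 ospf6 area 0.0.0.0
!
interface ens1f1
 ipv6 ospf6 area 0.0.0.0
!
router ospf6
 ospf6 router-id 0.0.0.1
!
```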
Hello, we are using a relatively simple OSPF config similar to this one:
https://packetpushers.net/blog/proxmox-ceph-full-mesh-hci-cluster-w-dynamic-routing/
We use ospfd and ospf6d.
/etc/frr/frr.conf.local looks like this:
frr version 10.4.1
frr...
I've verified that after some time everything now looks fine.
It was probably just slow gathering all the information from the servers.
I'll mark this solved.
The hypervisor already knows the subnets and is configured per VRF/zone to have a gateway IP and route the traffic.
I just want to add a static route to each VRF.
I would much rather just add the static route somewhere than have some tenant VM have...
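For example, a static route can live per VRF directly in the FRR config; the VRF name and prefixes below are made up for illustration:

```
vrf tenant1
 ip route 203.0.113.0/24 10.0.0.254
exit-vrf
```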
See https://forum.proxmox.com/threads/detected-undelivered-mail-to.180292/post-840426 for a bit more details - else this change is due to:
https://forum.proxmox.com/threads/proxmox-mail-gateway-security-advisories.149333/post-838667
and you can...
The change is related to a potential security issue addressed in:
https://forum.proxmox.com/threads/proxmox-mail-gateway-security-advisories.149333/post-838667
You can selectively allow broken mails by adding rules that match on the added...
Hello! Total noob here. I didn't find a solution on the forum (maybe user error). I have installed Ubuntu Server on Proxmox and for the life of me I can't sort out the QEMU agent. It keeps failing. All packages are updated, the agent is installed in the VM, enabled on...
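The usual checklist is: install and start the agent inside the guest, enable the agent option on the host, then fully power-cycle the VM. A sketch (the VMID 100 is an assumption):

```shell
# Inside the guest (Ubuntu) - install and start the agent:
apt update && apt install -y qemu-guest-agent
systemctl enable --now qemu-guest-agent

# On the Proxmox host:
qm set 100 --agent enabled=1

# The agent option only takes effect after a full stop/start, not a reboot:
qm stop 100 && qm start 100
```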
For Ubuntu I had used this one: https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img (Is that the right one?)
"root@ubuntu-cloudinit:~# systemctl restart sshd
Failed to restart sshd.service: Unit sshd.service not found."...
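That error is expected on Ubuntu: the OpenSSH unit there is named ssh.service rather than sshd.service, so the restart command would be:

```shell
systemctl restart ssh
```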
@Stefan123, if you use a dedicated Ceph cluster as storage, no Ceph OSDs run on the compute nodes. The local disks in those nodes then only need to hold the Proxmox OS and possibly ISOs/templates, and SAS SSDs are perfectly sufficient for that...
@LucasKer,
As @Johannes S already said, there is no central collection. Here are the direct links for your distros:
Debian: https://cloud.debian.org/images/cloud/ (see the latest/ subfolder of each release)
Ubuntu: https://cloud-images.ubuntu.com/ (e.g...
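A minimal sketch for turning one of these images into a cloud-init VM; the VMID 9000, the storage name local-lvm and the bridge vmbr0 are assumptions, adjust them to your setup:

```shell
wget https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img
qm create 9000 --name ubuntu-cloudinit --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 noble-server-cloudimg-amd64.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm set 9000 --ide2 local-lvm:cloudinit --boot order=scsi0 --serial0 socket --vga serial0
```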
Same problem here. For two days now our mail gateway has been reporting many, many errors like ...
message has ambiguous content - adding header
... said: 450 4.1.8 <double-bounce@mail4.localdomain>: Sender address rejected: Domain not found (in reply to...
Hi everybody, and thanks for this useful topic.
The workaround worked well for me.
But that solution causes another kind of issue for me.
We have a cluster of 3 nodes, and this happens on only one. We applied the fix, the node works fine with...
It depends. :)
Namely, on the network setup you end up running. The network speeds (bonding/LACP) for Ceph should match the I/O speeds of the disks (depending on the type and number of disks).
If...
That's a nice summary, I agree, and reclaiming space works fine on NFS 4.2 with qcow2 EVEN without any qemu-img convert - it works until you move the disk to a different datastore.
After moving the disk, qm monitor shows "driver": "zeroinit" (for the moved disk)...
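In my case, a trim from inside the guest usually gets reclaiming working again after a move. A sketch (the VMID 100 is an assumption; the first variant needs the guest agent running):

```shell
# From the Proxmox host, via the guest agent:
qm agent 100 fstrim

# Or directly inside the guest:
fstrim -av
```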
Hi!
There is no simple way to move VMs and CTs under a single root cgroup: for VMs the code already depends on the VM's cgroups being under the /qemu.slice cgroup, and the /lxc cgroup is given by LXC itself, which has many intricacies that make...
@Stefan123, if you use a dedicated Ceph cluster, no VM storage I/O runs on the compute nodes - the local disks there serve only as boot/OS drives for Proxmox itself. SAS SSDs are more than enough for that, even SATA SSDs...
You should make sure to give static IPs (in the same subnet as the Ceph network - check the Ceph config file for this) to the interfaces attached to the switch. Then just remove the FRR config and restart FRR to apply the change.
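A sketch of what that could look like in /etc/network/interfaces; the interface name and the 10.10.10.0/24 subnet are assumptions, take the real subnet from your ceph.conf:

```
auto ens1f0
iface ens1f0 inet static
    address 10.10.10.11/24
```

After removing the FRR config, `systemctl restart frr` applies the routing change, and `ifreload -a` (ifupdown2 on Proxmox) applies the interface change without a reboot.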