Latest activity

  • F
    Hello, we are using a relatively simple OSPF config similar to this: https://packetpushers.net/blog/proxmox-ceph-full-mesh-hci-cluster-w-dynamic-routing/ We use ospfd and ospf6d. /etc/frr/frr.conf.local looks like this: frr version 10.4.1 frr...
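    For context, a minimal full-mesh OSPF fragment for frr.conf in that style might look like the following (interface names and the router ID are placeholders, not taken from the post):

    ```
    frr version 10.4.1
    frr defaults traditional
    !
    interface en05
     ip ospf area 0
     ip ospf network point-to-point
    !
    interface en06
     ip ospf area 0
     ip ospf network point-to-point
    !
    router ospf
     ospf router-id 10.15.15.1
    !
    ```

    Point-to-point mode avoids DR/BDR election on the direct mesh links between the nodes.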
  • P
    I've verified that after some time everything now looks fine. It was probably just slow to gather all the information from the servers. I'll mark this solved.
  • Z
    ze42 replied to the thread evpn? network segmentation?.
    The hypervisor already knows the subnets and is configured per VRF/zone to have a gateway IP and route the traffic. I just want to add a static route to each VRF. I would much rather just add the static route somewhere than have some tenant VM have...
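    In FRR a static route can be placed directly inside a VRF stanza; a sketch, with a hypothetical VRF name and example prefixes (nothing here is taken from the actual setup):

    ```
    vrf tenant1
     ip route 203.0.113.0/24 10.0.0.254
     exit-vrf
    !
    ```

    The equivalent one-off with iproute2 would be `ip route add 203.0.113.0/24 via 10.0.0.254 vrf tenant1`, though routes added that way do not survive a reboot.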
  • Stoiko Ivanov
    See https://forum.proxmox.com/threads/detected-undelivered-mail-to.180292/post-840426 for a bit more details - else this change is due to: https://forum.proxmox.com/threads/proxmox-mail-gateway-security-advisories.149333/post-838667 and you can...
  • Stoiko Ivanov
    The change is related to a potential security issue addressed in: https://forum.proxmox.com/threads/proxmox-mail-gateway-security-advisories.149333/post-838667 You can selectively allow broken mails by adding rules that match on the added...
  • E
    Hello! Total noob here. I didn't find a solution on the forum (maybe user error). I have installed Ubuntu Server on Proxmox and for the life of me I can't sort out the QEMU agent. It keeps failing. All packages are updated, the agent is installed on the VM, enabled on...
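    A common checklist for this kind of problem, as a sketch (the VM ID 100 is a placeholder):

    ```shell
    # Inside the guest: install and start the agent
    apt install qemu-guest-agent
    systemctl enable --now qemu-guest-agent

    # On the Proxmox host: enable the agent option, then power-cycle the VM
    # (a reboot from inside the guest is not enough to attach the virtio device)
    qm set 100 --agent enabled=1
    qm shutdown 100 && qm start 100

    # Verify from the host
    qm agent 100 ping
    ```

    The full stop/start after enabling the option is the step most often missed.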
  • L
    I had used this one for Ubuntu: https://cloud-images.ubuntu.com/noble/current/noble-server-cloudimg-amd64.img (Is that the right one?) Running "root@ubuntu-cloudinit:~# systemctl restart sshd" fails with "Failed to restart sshd.service: Unit sshd.service not found."...
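    On Ubuntu and Debian the OpenSSH unit is named ssh.service rather than sshd.service, so the error above is expected on that cloud image. A sketch of the usual check:

    ```shell
    # Ubuntu/Debian name the unit "ssh", not "sshd"
    systemctl status ssh
    systemctl restart ssh
    ```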
  • B
    Bu66as replied to the thread Proxmox für 500-1000 VMs.
    @Stefan123, if you use a dedicated Ceph cluster as storage, no Ceph OSDs run on the compute nodes. The local disks in the nodes then only need to hold the Proxmox OS and possibly ISOs/templates; SAS SSDs are entirely sufficient for that...
  • L
    @LucasKer, as @Johannes S already said, there is no central collection. Here are the direct links for your distros: Debian: https://cloud.debian.org/images/cloud/ (use the latest/ subfolder in each case) Ubuntu: https://cloud-images.ubuntu.com/ (e.g...
  • H
    Same problem here. For two days our mail gateway has been reporting a great many errors like "... message has ambiguous content - adding header ..." and "... said: 450 4.1.8 <double-bounce@mail4.localdomain>: Sender address rejected: Domain not found (in reply to...
  • F
    Hi everybody, and thanks for this useful topic. The workaround worked well for me, but that solution causes another kind of issue for me. We have a cluster of 3 nodes, and this happens on only one. We applied the fix, and the node works fine with...
  • O
    We see the same thing. I thought it was because PDM is behind NAT.
  • V
    Thanks @Bua66as
  • B
    bl1mp replied to the thread Proxmox für 500-1000 VMs.
    It depends. :) Specifically, on the network setup you end up running. The network speeds (bonding/LACP) for Ceph should match the IO speeds of the disks (depending on the type and number of disks). If...
  • P
    That's a nice summary, I agree, and reclaiming space works fine on NFS 4.2 with qcow2, even without any qemu-img convert - it works until you move the disk to a different datastore. After moving the disk, qm monitor shows "driver": "zeroinit" (for the moved disk)...
  • dakralex
    Hi! There is no simple way to move VMs and CTs under a single root cgroup: for VMs the code already depends on the VM's cgroups being under the /qemu.slice cgroup, and the /lxc cgroup is given by LXC itself, which has many intricacies that make...
  • B
    Bu66as replied to the thread Proxmox für 500-1000 VMs.
    @Stefan123, if you use a dedicated Ceph cluster, no storage I/O for the VMs runs on the compute nodes; the local disks there serve only as boot/OS drives for Proxmox itself. SAS SSDs are easily sufficient for that, even SATA SSDs...
  • A
    Thank you, that was quick! When/if it arrives, I'll be able to try it out in our test cluster.
  • ggoller
    You should make sure to give static IPs to the interfaces attached to the switch, in the same subnet as the Ceph network (check the Ceph config file for this). Then just remove the FRR config and restart frr to apply the change.
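    On Proxmox this is typically done in /etc/network/interfaces; a sketch, where the interface name and subnet are placeholders (the real subnet is whatever cluster_network/public_network says in the Ceph config):

    ```
    auto en05
    iface en05 inet static
        address 10.15.15.1/24
    ```

    After editing, `ifreload -a` applies the interface change, and `systemctl restart frr` picks up the removed FRR config.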
  • S
    Stefan123 replied to the thread Proxmox für 500-1000 VMs.
    I actually do have one quick question. We are currently taking a rough look at hardware prices. If we had a central NVMe Ceph storage, is the speed of the disks in the nodes relevant? That is, could we use SAS SSDs here instead of NVMe without...