Hi @bbx1_,
Your output shows the core problem: the frr.service was inactive.
The status output also shows the service is disabled (Loaded: ...; disabled; ...), which is why it did not start automatically after you rebooted the nodes.
You...
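Assuming a standard systemd setup, the fix is to enable and start the unit in one step (the unit name is taken from the thread; run on each affected node):

```shell
# Enable frr at boot and start it immediately
systemctl enable --now frr.service

# Confirm it is now active (running) and enabled
systemctl status frr.service
```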
RAID-Z expansion including rebalancing is indeed part of OpenZFS 2.2, but it was classified as a "Technology Preview" and is disabled by default. Proxmox has not enabled these experimental features in the pve packages by default...
This isn't configured in the restore wizard itself, but on the backup repository. You don't need to recreate the workers.
The server that mounts the backup for a file-level restore (the Gateway Server) determines which virtualization platform it...
Hello @eisvleo,
For scripting without a password prompt, you should use API tokens. Create a new user in PBS with the necessary permissions on the datastore (e.g., the DatastoreBackup role) and generate an API token for it. You can then define...
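A minimal sketch of that setup; the user name, token name, datastore `store1`, and host name below are placeholders, not values from your system:

```shell
# On the PBS host: create a user, generate an API token for it, and grant
# the token the DatastoreBackup role on the datastore
proxmox-backup-manager user create backup@pbs
proxmox-backup-manager user generate-token backup@pbs automation   # prints the secret once -- save it
proxmox-backup-manager acl update /datastore/store1 DatastoreBackup --auth-id 'backup@pbs!automation'

# On the client: authenticate via environment variables instead of a prompt
export PBS_REPOSITORY='backup@pbs!automation@pbs.example.com:store1'
export PBS_PASSWORD='<token secret from generate-token>'
proxmox-backup-client backup root.pxar:/
```

Note that API tokens have their own permission set, so the ACL must be granted to the token's auth-id, not only to the user.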
Hello @iprigger,
to verify what @Stoiko Ivanov mentioned, could you please check the raw source of the report email? This would show whether a MIME part with Content-Type: text/html is actually included or if it's missing entirely.
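A quick way to check, sketched here against a hypothetical saved copy of the mail (save the raw message from your mail client first; the file path and sample content below are only illustrative):

```shell
# Illustrative stand-in for the saved raw report email
cat > /tmp/report.eml <<'EOF'
Content-Type: multipart/alternative; boundary="b1"

--b1
Content-Type: text/plain; charset=utf-8

plain-text part
--b1
Content-Type: text/html; charset=utf-8

<html><body><p>html part</p></body></html>
--b1--
EOF

# Exit status 0 means an HTML MIME part is present in the message
grep -i 'content-type: text/html' /tmp/report.eml
```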
I suspected something like that ... ;-)
OK, thanks. We'll go through the VMs on that node again in the GUI.
We haven't had time to get to the script yet.
The fact that the VMs initially start without visible errors on the new kernel even with aio=io_uring is deceptive. The problem is potential deadlocks or data corruption under I/O load, which do not have to show up immediately.
As @gurubert...
Hello @BvE,
your assumption is correct. After you set issue_discards = 1 in /etc/lvm/lvm.conf, you can apply the changes without a reboot by running vgchange --refresh.
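A sketch of the edit, tried out on a scratch file first (the sed expression assumes the stock `issue_discards = 0` line; apply the same change to the real /etc/lvm/lvm.conf):

```shell
# Verify the edit on a scratch copy before touching the real config
conf=/tmp/lvm.conf.demo
printf 'devices {\n    issue_discards = 0\n}\n' > "$conf"

# Flip the flag in place
sed -i 's/issue_discards = 0/issue_discards = 1/' "$conf"
grep 'issue_discards = 1' "$conf"

# On the real host, after editing /etc/lvm/lvm.conf the same way:
# vgchange --refresh
```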
Hello @bacbat32,
While @SteveITS is right that WAL/DB can contribute to the raw usage, the 1.8 TiB of STORED data seems to be caused by the ~900k objects that your initial rados -p ceph-hdd ls command showed. Since rbd ls is empty for that pool...
Hello @jnthans,
glad the plugin update worked for you.
Regarding the incorrect Hyper-V helper issue that @AndreasS mentioned: This usually points to the Backup Proxy configuration. It's worth checking which proxy is set for the Proxmox...
Hello @kobold81,
could you please post the configuration file of your VM? You can find it at /etc/pve/qemu-server/VMID.conf (replace VMID with the actual ID of your virtual machine). It's possible that the disk controller type (e.g., SATA...
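For reference, a typical config looks like this (all values here are illustrative, not from your system); the controller type is visible in the disk lines and in `scsihw`:

```
# /etc/pve/qemu-server/100.conf (example values)
boot: order=scsi0
cores: 2
memory: 4096
scsi0: local-lvm:vm-100-disk-0,size=32G
scsihw: virtio-scsi-pci
```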
Hello @pvpaulo,
@shanreich is on the right track. The different configurations are almost certainly caused by the vlan-aware flag on your vmbr0 on node PVE01.
If that bridge is set as VLAN-aware in /etc/network/interfaces, SDN correctly omits...
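For comparison, a VLAN-aware bridge in /etc/network/interfaces typically looks like this (addresses and port names are illustrative):

```
auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```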
Hello @alexandere,
the method shown by @Moayad using pct set <CTID> --nameserver <IP> is the correct way to make the DNS setting permanent. Normally, the DNS server should be automatically adopted from the host settings during the container's...
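For example, with a hypothetical container ID 101 and DNS server 192.0.2.53:

```shell
# Persist the DNS server in the container config
pct set 101 --nameserver 192.0.2.53

# Verify the setting
pct config 101 | grep nameserver
```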
Your understanding is absolutely correct.
The aio=threads setting is a per-virtual-disk option in the respective VM configuration (e.g., scsi0: dein-storage:100,aio=threads). You can also set this option on the nodes with the old...
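On the command line this can be set per disk, for example (VM ID, storage, and disk name below are illustrative):

```shell
# Re-specify the disk with aio=threads; include your other disk options in the value
qm set 100 --scsi0 local-lvm:vm-100-disk-0,aio=threads

# Check the result
qm config 100 | grep '^scsi0'
```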
@Johannes S, you put me on the right track to take a closer look at the maintainability of the bare-metal installations. Your objection regarding the complex, manual updates is, for classic bare-metal installations, absolutely...
@Trickreich,
yes, installing newer test kernels is possible. You can enable the pve-test repository for that. Create the file /etc/apt/sources.list.d/pve-test.list and add the following line:
deb...