Recent content by sherminator

  1. Debian 13 guests freezing on reboot

    Ok, my low-RAM guess doesn't seem to be that hot after all. Instead I noticed that on all affected VMs a) the consoles are frozen even before I try to reboot them and b) there are a lot of "[TTM] Buffer eviction failed" errors in their journals. So I searched for the error mentioned above and found...
  2. Debian 13 guests freezing on reboot

    Yes, I experienced the exact same issue in my (two-node) homelab. My guess is: this happens every time I reboot a guest while the node is low on RAM. Maybe you can try again with this in mind?
  3. Auto-Reply Messages Missing DKIM Signature – Ending Up in Spam

    We just changed this option from "Envelope" to "Header" - this resolved our issue (Gmail rejected the autoreplies from our Exchange server), and so far we can't see any disadvantages.
  4. Auto-Reply Messages Missing DKIM Signature – Ending Up in Spam

    We're facing the same situation, so my question is: Is it safe to change "Signing Domain Source" from "Envelope" to "Header"? Are there any real-world disadvantages?
  5. [SOLVED] Super slow, timeout, and VM stuck while backing up, after updated to PVE 9.1.1 and PBS 4.0.20

    I guess rebooting into kernel 6.14.11-4-pve is the least-effort workaround. That worked for us (and for others).
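To make that workaround stick across reboots, the known-good kernel can be pinned. A minimal sketch, assuming the host boots via proxmox-boot-tool (the kernel version string is the one from the post; run as root):

```shell
# List the kernels proxmox-boot-tool knows about.
proxmox-boot-tool kernel list

# Pin the known-good kernel so it is booted by default on every reboot.
proxmox-boot-tool kernel pin 6.14.11-4-pve

# Later, once a fixed kernel is released, remove the pin again.
proxmox-boot-tool kernel unpin
```

After pinning, a single reboot brings the host up on the older kernel without having to pick it manually in the boot menu.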
  6. [SOLVED] Super slow, timeout, and VM stuck while backing up, after updated to PVE 9.1.1 and PBS 4.0.20

    Welcome to the party! In our case, rebooting PBS into kernel 6.14.11-4-pve successfully worked around the issue.
  7. [SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

    Thanks for this hint! It's a bunch of NVMe drives (Western Digital Ultrastar DC SN640). It took so long because it was an expansion from about 60 TB to about 70 TB.
  8. [SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

    My news on this: It worked like a charm! zpool attach <poolname> <raidz vdev> <new disk as in /dev/disk/by-id> For example (with a random disk id): zpool attach my-pool raidz2-0 nvme-WUS4EB076B7P3E3_B0626C3A. The expanding and scrubbing took a lot of time, but the filesystem was usable during the...
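The expansion steps from the post above can be sketched like this. Pool and device names are placeholders, not real ones from that system; raidz expansion needs OpenZFS 2.3 or newer (shipped with PBS 4) and must be run as root:

```shell
# Show the pool layout to find the raidz vdev name (e.g. raidz2-0).
zpool status my-pool

# Attach the new disk to the existing raidz2 vdev. Because the target is a
# raidz vdev (not a single disk), this triggers a raidz expansion rather
# than creating a mirror. Device path is a placeholder.
zpool attach my-pool raidz2-0 /dev/disk/by-id/nvme-WUS4EB076B7P3E3_B0626C3A

# Re-check status to watch the expansion progress; as noted in the post,
# the pool stays online and usable while it runs.
zpool status my-pool
```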
  9. [SOLVED] Super slow, timeout, and VM stuck while backing up, after updated to PVE 9.1.1 and PBS 4.0.20

    Yes, we can. We also ran into this issue: 3 node PVE/Ceph cluster (8.4.14), dedicated PBS. After upgrading PBS to 4.1, backup tasks randomly slowed down and VMs froze with 100 % CPU load. Aborting the backup tasks and stopping and starting the affected VMs brought us back to normal. So I just...
  10. [SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

    Yes, you're right, it's a single vdev. Of course more vdevs give you better performance - and are more expensive when achieving the same level of redundancy. In real life we're quite happy with our backup storage performance. We write backups with about 1 GB/s, and we read (aka restore) backups...
  11. [SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

    Thanks! So I will go shopping and try it - and let you know how it went.
  12. [SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

    Hi there, does PBS 4 include a ZFS version that allows live expansion of a raidz2 pool with an additional disk? If so, has anyone successfully tried this yet? Thanks and greets Stephan
  13. Slow memory leak in 6.8.12-13-pve

    Side note from an unaffected setup: Our 3 node cluster (PVE/Ceph) is running PVE 8.4.x; last weekend (the gap in the chart below) we updated from kernel 6.8.12-11 to 6.8.12-13. Our Ceph network is built on Broadcom P425G NICs. Maybe that helps a little bit.
  14. Proxmox problems with Windows guests in the networking area

    Hi Markus, your description vaguely reminds me of the problems we had at the beginning of our current PVE hardware generation. On Windows VMs that launched applications located on another VM's file share, we have (that's just how it is with our...
  15. UPS Help! Power cut already 2 times

    I can recommend CyberPower. We run them in a couple of server and networking racks - no issues so far.