Recent content by sherminator

  1.

    Super slow, timeout, and VM stuck while backing up, after updated to PVE 9.1.1 and PBS 4.0.20

    Welcome to the party! In our case, rebooting PBS into kernel 6.14.11-4-pve successfully worked around the issue.
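
    Since the fix here is booting an older kernel, here is a minimal sketch of pinning it so the workaround survives reboots, assuming proxmox-boot-tool manages the boot entries (the version string is the one from the post above):

      # List the kernels that are currently installed and bootable
      proxmox-boot-tool kernel list

      # Pin the known-good kernel as the default across reboots
      proxmox-boot-tool kernel pin 6.14.11-4-pve

      # Once a fixed kernel is released, drop the pin again
      proxmox-boot-tool kernel unpin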
  2.

    [SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

    Thanks for this hint! It's a bunch of NVMe drives (Western Digital Ultrastar DC SN640). It took so long because it was an expansion from about 60 TB to about 70 TB.
  3.

    [SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

    My news on this: It worked like a charm!

      zpool attach <poolname> <raidlevel> <new disk as in /dev/disk/by-id>

    For example (with a random disk id):

      zpool attach my-pool raidz2-0 nvme-WUS4EB076B7P3E3_B0626C3A

    The expanding and scrubbing took a lot of time, but the filesystem was usable during the...
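
    As a hedged follow-up, progress and the resulting capacity can be checked with standard zpool commands (pool name taken from the example above):

      # Watch the expansion and subsequent scrub progress
      zpool status -v my-pool

      # Confirm the new capacity once the expansion has finished
      zpool list my-pool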
  4.

    Super slow, timeout, and VM stuck while backing up, after updated to PVE 9.1.1 and PBS 4.0.20

    Yes, we can. We also ran into this issue: 3-node PVE/Ceph cluster (8.4.14), dedicated PBS. After upgrading PBS to 4.1, backup tasks randomly slowed down and VMs froze at 100 % CPU load. Aborting the backup tasks and stopping and starting the affected VMs brought us back to normal. So I just...
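
    For reference, a minimal sketch of that recovery on the PVE side; the UPID placeholder and VMID 101 are illustrative, not taken from the post:

      # Find the stuck backup task's UPID
      pvenode task list

      # Abort the stuck backup task
      pvenode task stop <UPID>

      # Hard-stop and restart an affected, frozen VM (VMID 101 is hypothetical)
      qm stop 101
      qm start 101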
  5.

    [SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

    Yes, you're right, it's a single vdev. Of course more vdevs give you better performance - and they are more expensive when achieving the same level of redundancy. In real life we're quite happy with our backup storage performance: we write backups at about 1 GB/s, and we read (aka restore) backups...
  6.

    [SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

    Thanks! So I will go shopping and give it a try - and let you know how it went.
  7.

    [SOLVED] PBS 4: ZFS raidz2 - live expansion possible?

    Hi there, does PBS 4 include a ZFS version that allows live expansion of a raidz2 pool with an additional disk? If so, has anyone successfully tried this yet? Thanks and greetings, Stephan
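
    One way to answer this on a given box, assuming raidz expansion arrived with OpenZFS 2.3 and is gated behind a pool feature flag (pool name is a placeholder):

      # Check the installed OpenZFS version (raidz expansion needs 2.3+)
      zfs version

      # Check whether the pool offers the raidz_expansion feature
      zpool get feature@raidz_expansion <poolname>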
  8.

    Slow memory leak in 6.8.12-13-pve

    Side note from an unaffected setup: our 3-node cluster (PVE/Ceph) is running PVE 8.4.x; last weekend (the gap in the chart below) we updated from kernel 6.8.12-11 to 6.8.12-13. Our Ceph network is built on Broadcom P425G NICs. Maybe that helps a little bit.
  9.

    Proxmox problems with Windows guests in the network area

    Hi Markus, your description vaguely reminds me of the problems we had at the beginning of our current PVE hardware generation. On Windows VMs where applications were launched that reside on another VM's file share (that's just how it is with our...
  10.

    UPS Help! Power cut already 2 times

    I can recommend CyberPower. We run them in a couple of server and networking racks - no issues so far.
  11.

    Accessing internet from Proxmox

    To understand your setup better: Is the connection from your PC to the Proxmox web GUI working? The OPNsense VM has two virtual NICs? Its "WAN" is connected to your WAN bridge, and its "LAN" is connected to... which bridge?
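
    For orientation, a minimal sketch of what such a two-bridge layout often looks like in /etc/network/interfaces on the PVE host; all interface names and addresses are assumptions, not taken from the thread:

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.1.10/24
          gateway 192.168.1.1
          bridge-ports eno1
          bridge-stp off
          bridge-fd 0
      # vmbr0: uplink/"WAN" bridge for the OPNsense WAN vNIC

      auto vmbr1
      iface vmbr1 inet manual
          bridge-ports none
          bridge-stp off
          bridge-fd 0
      # vmbr1: internal "LAN" bridge for the OPNsense LAN vNIC and the guests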
  12.

    Network card is not detected in the GUI

    Hi, ip addr should show you all NICs that the operating system has detected.
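
    A slightly terser variant that is often easier to scan (same idea, just the brief output format):

      # One line per NIC: name, state, MAC address
      ip -br link

      # Same, but including the assigned IP addresses
      ip -br addr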
  13.

    [TUTORIAL] Broadcom NICs down after PVE 8.2 (Kernel 6.8)

    I would like to share today's observations on this: I just updated some P425G NICs, and after a server reboot everything looks as expected:

      Active Package version : 230.1.116.0
      Package version on NVM : 230.1.116.0
      Firmware version       : 230.0.156.0

    But on a P225G a reboot...
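
    For a cross-check from the OS side that does not depend on the vendor tool, ethtool reports the firmware the driver actually sees (interface name is a placeholder):

      # Show driver and running firmware version for a given NIC
      ethtool -i <iface>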
  14.

    Many TCP Retransmissions and TCP Dup ACKs: Wrong link aggregation configuration?

    Thanks for your reply! Hm, (R)STP is configured on all switches, and according to our bandwidth monitoring there is no loop between the switches. What can I do to debug this? But I feel this is not a Proxmox issue anymore... o_O
  15.

    Many TCP Retransmissions and TCP Dup ACKs: Wrong link aggregation configuration?

    Hi there, on our three-node Proxmox/Ceph cluster we discovered many of the above TCP errors. We tracked it down to this: only outgoing traffic from a VM to any destination that is not on the same Proxmox node is affected. Each node is connected via 2x 10G to a switch. The related network...
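
    Since mismatched LACP settings are a classic cause of exactly this pattern, here is a minimal sketch of an 802.3ad bond in /etc/network/interfaces to compare against; interface names, addresses, and the hash policy are assumptions, not the actual config from the thread:

      auto bond0
      iface bond0 inet manual
          bond-slaves enp1s0f0 enp1s0f1
          bond-mode 802.3ad
          bond-xmit-hash-policy layer3+4
          bond-miimon 100
      # The switch side must be set up as an LACP (802.3ad) port-channel;
      # a static LAG or plain trunk on only one side tends to produce
      # exactly these retransmissions and duplicate ACKs.

      auto vmbr0
      iface vmbr0 inet static
          address 10.0.0.11/24
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0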