Recent content by quanto11

  1. Q

    Super slow, timeout, and VM stuck while backing up, after updating to PVE 9.1.1 and PBS 4.0.20

    Hey fiona, here are the logs. I'll run the command you provided during the next stuck backup and let you know right away. That could take a few days; the behavior cannot be easily reproduced.
  2. Q

    Super slow, timeout, and VM stuck while backing up, after updating to PVE 9.1.1 and PBS 4.0.20

    I can confirm this behavior. The problem started for us on November 6. We are using a Ceph 3/2 cluster system with enterprise hardware, and PBS has been running continuously for two years without any problems. The problem does not always occur on the same VM or host, but jumps sporadically...
  3. Q

    [SOLVED] Proxmox VE 9.0 BETA LXC Docker not working

    @wbumiller updated the bug tracker with a working solution. I can confirm: after adding "lxc.apparmor.raw: allow mqueue," to the config, everything is working fine. All LXC containers are up and running. Thank you for your support!
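For reference, the quoted fix goes into the container's configuration file on the host. A minimal sketch, assuming container ID 100 and typical surrounding options (the ID and the other lines are placeholders, not from the original post):

```text
# /etc/pve/lxc/100.conf — container ID 100 is a placeholder
arch: amd64
ostype: debian
features: nesting=1
# Allow POSIX message queues under AppArmor; the trailing comma is
# part of the AppArmor rule syntax, not a typo.
lxc.apparmor.raw: allow mqueue,
```

Restart the container afterwards so the new AppArmor rule takes effect.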
  4. Q

    [SOLVED] Proxmox VE 9.0 BETA LXC Docker not working

    Hey guys, I updated to VE 9.0 Beta, and since then my LXC Docker apps haven't been running. Every container shows the same error message: Does anyone have a solution for this? PS: I'm also not able to restore a backup
  5. Q

    OSD's keep crashing down/in, down/out

    It was me. I removed my post because there seems to be something wrong with your setup. Like bl1mp said, you need to reconfigure your setup for Ceph. I can recommend the routed (simple) method: easy to configure, simple to understand, rock stable. I do not understand why every node got...
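The "routed (simple)" method refers to the full-mesh network setup described in the Proxmox wiki. A minimal sketch for one node of a three-node mesh; the interface names and addresses are placeholder assumptions, not from the original post:

```text
# /etc/network/interfaces fragment on node1 (10.15.15.50)
# Direct cables: ens19 -> node2 (10.15.15.51), ens20 -> node3 (10.15.15.52)
auto ens19
iface ens19 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.51/32 dev ens19
        down ip route del 10.15.15.51/32

auto ens20
iface ens20 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.52/32 dev ens20
        down ip route del 10.15.15.52/32
```

Each node carries the same pattern with its own address and /32 routes to its two peers, so Ceph traffic flows over the direct links without a switch.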
  6. Q

    After upgrading to Ceph Reef, getting DB spillover.

    The method nh2 mentioned works. Here is an example for osd1 for those who do not want to search for it themselves:
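The example itself is cut off in this forum preview. As a hedged sketch only (this may not be nh2's exact method): BlueStore DB spillover warnings can often be cleared by compacting the OSD's RocksDB, e.g. for osd.1:

```shell
# Compact osd.1's RocksDB online (an assumption, not necessarily the
# method referenced in the truncated post)
ceph tell osd.1 compact

# Check whether the spillover warning has cleared
ceph health detail
```

If the DB device is simply too small, compaction only helps temporarily and the block.db volume needs to be resized or migrated instead.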
  7. Q

    Inconsistent Disk Usage - VM Crashing

    The 10% is headroom; you should never overprovision storage space, which could result in data loss. I think you can unmount while the server is running: take the volume offline via Windows, detach it via Proxmox, and recreate it with the needed settings, like discard on. Not 100% sure
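The detach-and-reattach steps can be sketched on the Proxmox CLI. The VM ID (100), disk slot (scsi1), and volume name are placeholder assumptions; take the volume offline inside Windows first:

```shell
# Detach the disk from the VM; it shows up as an "unused" disk afterwards
qm set 100 --delete scsi1

# Re-attach the same volume with discard enabled
# (volume name is a placeholder — check "qm config 100" for the real one)
qm set 100 --scsi1 local-lvm:vm-100-disk-1,discard=on
```

This changes the disk's attachment options in place rather than recreating the volume, so the data on it is preserved.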
  8. Q

    [SOLVED] VM slow since switching to host CPU

    Do you have a link for me to where this problem is being discussed? I was able to reproduce similar behavior on very slow, HDD-based Ceph storage, which drives the HDD queue up to an unbearable level, so that the Ceph volume in Windows...
  9. Q

    [SOLVED] VM slow since switching to host CPU

    In which configuration does v266 act up that badly? Nearly 100 VMs have been running here with iSCSI since December without any problems.
  10. Q

    Inconsistent Disk Usage - VM Crashing

    You should never make a VM's disk as large as or larger than the underlying storage; always leave at least 10% free. In addition, your VM does not have "Discard" enabled, so deleted data is not reported to the host, and your storage is now displayed as full although 1/4 of it is actually free...
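Enabling discard alone is not enough; the guest also has to issue the trim so the host can reclaim deleted blocks. A hedged sketch (VM ID, disk slot, and volume name are assumptions):

```shell
# On the Proxmox host: attach/re-attach the disk with discard enabled
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on

# Inside a Linux guest: release deleted blocks back to the storage
fstrim -av

# Inside a Windows guest (PowerShell) the equivalent is:
#   Optimize-Volume -DriveLetter C -ReTrim
```

After the trim runs, the storage usage shown on the host should drop back toward what the guest actually uses.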
  11. Q

    Planning a Proxmox cluster + storage

    In my opinion that is completely absurd. You might be able to do that in very small companies with small VMs, but for anything over 200 GB you have to reckon with considerable time lost, not to mention when hardware problems occur.
  12. Q

    Ceph fails after power loss: SLOW_OPS, OSDs flip between down and up

    Are the time settings on every cluster member correct, and are the mons and managers up? Can you provide the following details: ceph status, ceph osd tree, ceph osd df, ceph pg dump pgs_brief. You can try: ceph osd set norecover, ceph osd set nobackfill, ceph osd set noup, ceph osd set nodown. After that ceph...
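The diagnostic and flag commands listed above, in order, with the matching unset step added for completeness (the unset commands are an addition, not part of the truncated preview):

```shell
# 1. Gather cluster state
ceph status
ceph osd tree
ceph osd df
ceph pg dump pgs_brief

# 2. Freeze recovery/backfill and OSD state flapping while investigating
ceph osd set norecover
ceph osd set nobackfill
ceph osd set noup
ceph osd set nodown

# 3. Once the OSDs are stable again, remove the flags so the
#    cluster can recover normally
ceph osd unset norecover
ceph osd unset nobackfill
ceph osd unset noup
ceph osd unset nodown
```

With noup/nodown set, the monitors stop marking OSDs up or down, which keeps flapping OSDs from triggering repeated recovery storms while you diagnose the slow ops.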