Recent content by elimus_

  1. [SOLVED] PVE v7 / PBS v2.1 - backup qmp timeouts

    It looks like pve-qemu-kvm=6.2.0-8 solves the problem. Best to follow the previously mentioned topic.
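
    A minimal sketch of applying that fix, assuming the package version is still available in the configured repos:

    > apt update
    > apt install pve-qemu-kvm=6.2.0-8

    As far as I know, already-running VMs keep the old QEMU binary until they are stopped/started or migrated.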
  2. [SOLVED] PVE v7 / PBS v2.1 - backup qmp timeouts

    Noticed: https://forum.proxmox.com/threads/possible-bug-after-upgrading-to-7-2-vm-freeze-if-backing-up-large-disks.109272/post-470734 To add: both of my areas also do NOT have krbd enabled for the CEPH storage. Haven't experimented. Can I enable this on the fly, or do I need to restart the PVE node and...
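
    A minimal sketch of toggling krbd on an existing RBD storage (the storage ID "ceph-vm" is only a placeholder):

    > pvesm set ceph-vm --krbd 1

    As far as I understand, the change only affects guests started or migrated after the switch, so a node restart shouldn't be required; treat that as an assumption to verify.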
  3. [SOLVED] PVE v7 / PBS v2.1 - backup qmp timeouts

    Hello Thomas, Nodes: Staging - 3 nodes of both PVE and PVE CEPH; Production - 12 PVE nodes, CEPH0 5 nodes, CEPH1 3 nodes. Hardware (PVE / CEPH nodes, both Staging/Production areas): CPUs: 1-2 AMD EPYC 7xx1/7xx2 series CPUs; RAM: 128G-256G; CEPH disk/osd: Staging: 18 OSDs, 6 HDD per...
  4. [SOLVED] PVE v7 / PBS v2.1 - backup qmp timeouts

    ps. Not sure if this should be in the PVE section or the PBS section. Any ideas/suggestions on what I could pursue next for debugging this issue? Summary: with v7 I'm starting to get periodic or sometimes constant issues with PVE/PBS backups. Backup tasks throw errors about backup timeouts on certain...
  5. [SOLVED] migration plan from v5 to v7 cluster.

    Hello, I'm trying to think of the best way to upgrade my now outdated v5.4 cluster to v7. At the moment I am thinking of simply reinstalling the nodes to v7 and migrating using the VM conf files and the shared ceph rbd pool that is used as storage (separate hardware, also still running outdated v5, but...
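
    A rough sketch of the conf-file move I have in mind, assuming the same VMIDs are free on the new cluster and the rbd storage keeps the same name (both assumptions):

    # on an old v5 node, with the VM stopped
    > scp /etc/pve/qemu-server/<vmid>.conf root@new-node:/root/
    # on the new v7 node, after adding the matching rbd storage entry
    > mv /root/<vmid>.conf /etc/pve/qemu-server/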
  6. [SOLVED] Nested lab setup problem with PVE as VM with its L3 VMs stalling soon after launch.

    Well, it turns out that it really was the kernel version.
    - 5.8.0-2 -- results in an almost instant PVE restart as soon as the L3 VM starts to load its kernel...
    - 5.7.14-1 -- stalls as originally mentioned.
    - 5.4.57-1 -- SUCCESS - stable nested setup
  7. [SOLVED] Nested lab setup problem with PVE as VM with its L3 VMs stalling soon after launch.

    I have the exact same model. At least now I know that it should work. Thanks for the comment, as that at least gives me hope that I should be able to get this working. Will try with the latest v5.8 kernel that was just released for Manjaro. If that fails, then the next thing is to start experimenting with...
  8. [SOLVED] Nested lab setup problem with PVE as VM with its L3 VMs stalling soon after launch.

    I am attempting to stabilize a lab setup with PVE running in a nested configuration on my Threadripper host. For some reason, if I launch a VM on this PVE VM with hardware virtualization enabled, it just stalls, the PVE VM and the L3 VM freezing almost at the same time. So far I haven't been able...
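
    For reference, a minimal check of nested virtualization on the host (a sketch; the kvm_amd paths assume an AMD CPU, which matches the Threadripper):

    > cat /sys/module/kvm_amd/parameters/nested    # should report 1
    > echo "options kvm_amd nested=1" > /etc/modprobe.d/kvm_amd.conf
    # then reload the module with all VMs stopped, or reboot, for it to take effect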
  9. PBS test, restore permissions issue.

    Thanks for the suggestions. More details on this, please? I am logged in as "root@pam" on PVE/PBS, yes. But the PBS stores on PVE have only been added using "archiver@pbs" credentials for auth on the PVE side. In the datastore interface for the test store, the owner of the backups also seems to be correct, as far as I...
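
    A sketch of what I would check on the PBS side for that user (the datastore name "test" is only a placeholder, and flag names may differ slightly between PBS versions):

    > proxmox-backup-manager acl list
    > proxmox-backup-manager acl update /datastore/test DatastorePowerUser --auth-id archiver@pbs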
  10. PBS test, restore permissions issue.

    It should have? As mentioned, I use the "DatastorePowerUser" role for it. Or have I misunderstood something in the permission scheme? Before that I also tried "DatastoreBackup" with the same results.
  11. PBS test, restore permissions issue.

    Testing PBS in the lab and atm I have the setup up and running (both hosts running the latest packages).
    * I can create backups from PVE
    * View them in the storage view from PVE
    But I cannot restore. I get an error. PVE side: Error: HTTP Error 400 Bad Request: no permissions TASK ERROR: command...
  12. [SOLVED] probably - ceph performance graph weirdness

    Have any other ceph users noticed weirdness with the performance graph, where one read or write does not seem to reflect the real situation? Mine currently shows this and I think that it's a bit off... Specifically looking at Reads... for +-50 VMs this is weird. One thing to say, it was after...
  13. Ceph OSD disk replacement

    One more note for those that might look at this thread as a HOWTO in the future, as I did now. To unmount/disable/power down the sata device, IF the system hasn't already done that for you: > echo 1 > /sys/block/(whatever)/device/delete...
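
    For completeness, a rough sketch of where that step fits (osd.<id> and sdX are placeholders for the failed OSD and its device):

    > ceph osd out osd.<id>
    > systemctl stop ceph-osd@<id>
    > echo 1 > /sys/block/sdX/device/delete    # detaches / powers down the SATA device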
  14. CEPH cluster. Wanted a comment from other PVE ceph users on ram usage per node.

    I think most was eaten up by the ceph osd processes. Interestingly, I remember seeing them go over that 4294967296 default osd_memory_target, but I will have to check again whether it happens in the current release (pve 5.4-6 | ceph 12.2.12). I think that I miscalculated when planning the nodes and...
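
    To put rough numbers on it (assuming 6 OSDs per node, as in the staging layout): 6 x 4294967296 B (the 4 GiB default osd_memory_target) is roughly 24 GiB per node for OSD caches alone, before the MON/MGR daemons, the VMs themselves, and any spikes above the target.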
  15. CEPH cluster. Wanted a comment from other PVE ceph users on ram usage per node.

    Nodes are still on pve 5.3-11 w/ ceph: 12.2.11-pve1. This version afaik already comes with the bluestore RAM cache by default. And yes, I used the default 4G; for a start I didn't see the need to change this. ceph daemon osd.0 config show | grep memory_target "osd_memory_target": "4294967296"...
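
    If lowering the target turns out to be needed, a minimal sketch of the ceph.conf approach (the 2 GiB value is only an example):

    # /etc/pve/ceph.conf
    [osd]
    osd_memory_target = 2147483648

    Each OSD then needs a restart to pick it up; I believe Luminous has no central config db, so treat that as an assumption to double-check.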