Search results

  1. [SOLVED] PVE v7 / PBS v2.1 - backup qmp timeouts

    It looks like pve-qemu-kvm=6.2.0-8 solves the problem. Best to follow the previously mentioned topic.
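
    If you want to move straight to that build, a minimal sketch (version string taken from the post above; standard apt version pinning):

      apt update
      apt install pve-qemu-kvm=6.2.0-8

    Note that running VMs keep the old QEMU binary until they are restarted or migrated.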
  2. [SOLVED] PVE v7 / PBS v2.1 - backup qmp timeouts

    Noticed: https://forum.proxmox.com/threads/possible-bug-after-upgrading-to-7-2-vm-freeze-if-backing-up-large-disks.109272/post-470734 To add: both of my areas also do NOT have krbd enabled for the CEPH storage. Haven't experimented. Can I enable this on the fly? Or do I need to restart the PVE node and...
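
    krbd is a per-storage option, so it can in principle be flipped on the fly; a minimal sketch, assuming a storage ID of ceph-rbd (hypothetical). Running guests only pick the change up when their disks are next mapped, i.e. on VM stop/start or migration, so no node restart should be needed:

      pvesm set ceph-rbd --krbd 1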
  3. [SOLVED] PVE v7 / PBS v2.1 - backup qmp timeouts

    Hello Thomas, Nodes: Staging - 3 nodes of both PVE and PVE CEPH; Production - 12 PVE nodes, CEPH0 5 nodes, CEPH1 3 nodes. Hardware (PVE / CEPH nodes, both Staging/Production areas): CPUs: 1-2 AMD EPYC 7xx1/7xx2 series CPUs; RAM: 128G-256G; CEPH disk/osd: Staging: 18 OSDs, 6 HDD per...
  4. [SOLVED] PVE v7 / PBS v2.1 - backup qmp timeouts

    ps. Not sure if this should be in the PVE section or the PBS section. Any ideas/suggestions on what I could pursue next for debugging this issue? Summary: With v7 I'm starting to get periodic or sometimes constant issues with PVE/PBS backups. Backup tasks throw errors about backup timeouts on certain...
  5. [SOLVED] migration plan from v5 to v7 cluster.

    Hello, I'm trying to think of the best way to upgrade my now-outdated v5.4 cluster to v7. At the moment I am thinking of simply reinstalling the nodes with v7 and migrating using the VM conf files and the shared ceph rbd pool that is used as storage (separate hardware, also still running outdated v5, but...
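
    For the conf-file route described here, a minimal sketch (VMID 101 and host pve7-node are hypothetical; the rbd pool must be configured under the same storage ID on the new cluster, or the disk lines in the conf will not resolve):

      qm stop 101
      scp /etc/pve/qemu-server/101.conf root@pve7-node:/etc/pve/qemu-server/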
  6. [SOLVED] Nested lab setup problem with PVE as VM with its L3 VMs stalling soon after launch.

    Well, it turns out that it really was the kernel version. - 5.8.0-2 -- Results in an almost instant PVE restart as soon as the L3 VM starts to load its kernel... - 5.7.14-1 -- Stalls as originally mentioned. - 5.4.57-1 -- SUCCESS - stable nested setup
  7. [SOLVED] Nested lab setup problem with PVE as VM with its L3 VMs stalling soon after launch.

    I have the exact same model. At least now I know that it should work. Thanks for the comment, as that at least gives me hope that I should be able to get this working. Will try with the latest v5.8 kernel that was just released for Manjaro. If that fails, then the next thing is to start experimenting with...
  8. [SOLVED] Nested lab setup problem with PVE as VM with its L3 VMs stalling soon after launch.

    I am attempting to stabilize a lab setup with PVE running in a nested configuration on my Threadripper host. For some reason, if I launch a VM on this PVE VM with hardware virtualization enabled, it just stalls, the PVE VM and the L3 VM freezing at almost the same time. So far I haven't been able...
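
    Before digging further, it is worth confirming the nesting chain is enabled end to end; a minimal sketch for an AMD host (VMID 100 for the PVE guest is hypothetical):

      cat /sys/module/kvm_amd/parameters/nested   # 1 = nested SVM enabled on the host
      qm set 100 --cpu host                       # pass the host CPU flags through to the PVE guest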
  9. PBS test, restore permissions issue.

    Thanks for the suggestions. More details on this, please? I am logged in as "root@pam" on PVE/PBS, yes. But the PBS stores on PVE have only been added using "archiver@pbs" credentials for auth on the PVE side. In the datastore interface for the test store, the owner of the backups also seems to be correct, as far as I...
  10. PBS test, restore permissions issue.

    It should have? As mentioned, I use the "DatastorePowerUser" role for it. Or have I misunderstood something in the permission scheme? Before that I also tried "DatastoreBackup" with the same results.
  11. PBS test, restore permissions issue.

    Testing PBS in the lab, and at the moment I have the setup up and running (both hosts running the latest packages). * I can create backups from PVE * I can view them in the storage view from PVE But I cannot restore; I get an error on the PVE side: Error: HTTP Error 400 Bad Request: no permissions TASK ERROR: command...
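
    If a restore fails with "no permissions", the auth-id used on the PVE side needs a role with read access on the datastore ACL. A minimal sketch on the PBS host, reusing archiver@pbs and DatastorePowerUser from the posts above (datastore name teststore is hypothetical):

      proxmox-backup-manager acl update /datastore/teststore DatastorePowerUser --auth-id archiver@pbs
      proxmox-backup-manager acl list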
  12. [SOLVED] probably - ceph performance graph weirdness

    Have any other ceph users noticed weirdness with the performance graph, where reads or writes do not seem to reflect the real situation? Mine currently shows this and I think that it's a bit off... Specifically looking at Reads... for +-50 VMs this is weird. One thing to note: it was after...
  13. Ceph OSD disk replacement

    One more note for those that might look at this thread as a HOWTO in the future, as I did now. To unmount/disable/power down the sata device, IF the system hasn't already done that for you: > echo 1 > /sys/block/(whatever)/device/delete...
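
    For context, a sketch of the steps that usually precede that command when replacing a failed OSD (OSD id 5 and device sdX are hypothetical):

      ceph osd out 5                            # let data rebalance off the OSD
      systemctl stop ceph-osd@5                 # stop the daemon once the cluster is healthy
      ceph osd purge 5 --yes-i-really-mean-it   # drop it from the CRUSH map and auth
      echo 1 > /sys/block/sdX/device/delete     # power down/detach the SATA device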
  14. CEPH cluster. Wanted a comment from other PVE ceph users on ram usage per node.

    I think most of it was eaten up by the ceph osd processes. Interestingly, I remember seeing them go over the 4294967296 default osd_memory_target, but I will have to check whether it happens again in the current release (pve 5.4-6 | ceph 12.2.12). I think that I miscalculated when planning the nodes and...
  15. CEPH cluster. Wanted a comment from other PVE ceph users on ram usage per node.

    Nodes are still on pve 5.3-11 w/ ceph 12.2.11-pve1. This version afaik already comes with the bluestore ram cache by default. And yes, I used the default 4G; for a start I didn't see the need to change this. ceph daemon osd.0 config show | grep memory_target "osd_memory_target": "4294967296"...
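
    On a Luminous-era setup like this one, lowering the target is a ceph.conf change; a minimal sketch, assuming a 2 GiB target (add under the [osd] section of /etc/pve/ceph.conf, then restart OSDs one node at a time):

      osd_memory_target = 2147483648

      systemctl restart ceph-osd.target   # per node, after the cluster is healthy again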
  16. CEPH cluster. Wanted a comment from other PVE ceph users on ram usage per node.

    Currently all nodes are under load and memory consumption is around 90-95% on each of them. CEPH cluster details: * 5 nodes in total, all 5 used for OSDs, 3 of them also used as monitors * All 5 nodes currently have 64G ram * OSDs: 12 disks in total per node - 6x6TB hdd and 6x500G ssd *...
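
    For scale: with the default osd_memory_target of 4294967296 bytes (4 GiB, per the posts above), 12 OSDs per node can claim roughly 12 x 4 GiB = 48 GiB for OSD caches alone, leaving only about 16 GiB of a 64G node for the monitors, the OS, and everything else, which is consistent with the 90-95% usage reported.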
  17. [SOLVED] ceph menus stuck on "loading" after last monitor node reboot

    So I was changing some network-related settings on a 5-node ceph cluster (still in the configuration/testing stage). To apply the settings I rebooted the nodes (3 of them monitors/managers, all five are osd hosts, all nodes pve-v5.3-11) one by one, waiting until the previous one came back up and then...
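
    When the GUI hangs on "loading" like this, checking quorum from a shell usually narrows it down; a minimal sketch (the mon unit name assumes the default hostname-based mon id):

      ceph -s                                 # overall health, shows which mons are in quorum
      ceph quorum_status                      # detailed quorum view
      systemctl status ceph-mon@$(hostname)   # state of the mon daemon on this node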
  18. KVM module access lost after upgrade from v5.0beta to v5.0?

    Thank you for the reply. Thanks to your suggestion to try an older kernel version, I actually noticed what I had done wrong... I used to run this on an Intel machine and I had changed to AMD hardware with the same system disk. The problem was that I had left a nested configuration that works for kvm-intel...
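
    For reference, the nested-virtualization module options differ per vendor; a minimal sketch of a modprobe snippet (filename hypothetical, the thread does not show the exact file):

      # /etc/modprobe.d/kvm-nested.conf
      options kvm-intel nested=Y   # Intel hosts
      options kvm-amd nested=1     # AMD hosts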
  19. KVM module access lost after upgrade from v5.0beta to v5.0?

    I am encountering an interesting problem. I was upgrading my homelab proxmox installation from the 5.0-5/c155b5bc build to the 5.0-30 build. Now, after the upgrade, I can no longer run VMs, as the system thinks that KVM is not accessible. ... root@pve:~# qm start 101 Could not access...
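
    A few checks that narrow this kind of failure down, assuming shell access on the node:

      lsmod | grep kvm     # is kvm_intel or kvm_amd loaded at all?
      ls -l /dev/kvm       # does the device node exist?
      dmesg | grep -i kvm  # load errors, e.g. virtualization disabled in the BIOS
      modprobe kvm_amd     # or kvm_intel, matching the CPU vendor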
