Noticed: https://forum.proxmox.com/threads/possible-bug-after-upgrading-to-7-2-vm-freeze-if-backing-up-large-disks.109272/post-470734
To add: both of my setups also do NOT have krbd enabled for the Ceph storage.
Haven't experimented yet. Can I enable this on the fly, or do I need to restart the PVE node and...
P.S. Not sure if this should be in the PVE section or the PBS section.
Any ideas/suggestions on what I could pursue next for debugging this issue?
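On the krbd question, a hedged pointer for anyone else reading: as far as I understand it, krbd can be toggled per storage from the CLI without rebooting the node, though already-running guests most likely keep their current mapping until they are stopped and started again. Something like this (the storage ID "ceph-vm" is just a placeholder):

# switch an existing RBD storage definition to the kernel RBD client
pvesm set ceph-vm --krbd 1

# and back off again if it makes things worse
pvesm set ceph-vm --krbd 0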
Summary
With v7 I'm starting to get periodic, and sometimes constant, issues with PVE/PBS backups.
The backup task throws errors about backup timeouts on certain...
Hello,
I'm trying to think of the best way to upgrade my now outdated v5.4 cluster to v7. At the moment I am thinking of simply reinstalling the nodes with v7 and migrating using the VM conf files and the shared Ceph RBD pool that is used as storage (separate hardware, also still running outdated v5, but...
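If it helps anyone thinking along the same lines, the rough mechanics I have in mind (untested, and the backup path is just an example of my own) would be something like:

# on an old v5.4 node: the VM definitions are plain text files
cp -a /etc/pve/qemu-server/ /root/vm-confs-backup/

# on the reinstalled v7 node: re-add the shared Ceph RBD pool under the SAME storage ID,
# then drop the conf files back in place so the VMs show up again
cp /root/vm-confs-backup/*.conf /etc/pve/qemu-server/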
Well, it turns out that it really was the kernel version.
- 5.8.0-2 -- Results in an almost instant PVE restart as soon as the L3 VM starts to load its kernel...
- 5.7.14-1 -- Stalls as originally mentioned.
- 5.4.57-1 -- SUCCESS - stable nested setup
I have the exact same model. At least now I know that it should work. Thanks for the comment, as that at least gives me hope that I should be able to get this working.
Will try with the latest v5.8 kernel that was just released for Manjaro. If that fails, then the next thing is to start experimenting with...
I am attempting to stabilize a lab setup with PVE running in a nested configuration on my Threadripper host. So far, for some reason, if I launch a VM on this PVE VM with hardware virtualization enabled for it, it just stalls, with the PVE VM and the L3 VM freezing almost at the same time.
So far I haven't been able...
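For context, the basic checks that apply here (plain sysfs/procfs lookups, nothing Proxmox-specific) are whether nesting is enabled on the host and whether the virtualization flag actually reaches the PVE VM:

# on the Threadripper host: is nested SVM enabled in kvm_amd?
cat /sys/module/kvm_amd/parameters/nested

# inside the PVE VM: is the svm flag exposed to the guest at all?
grep -c svm /proc/cpuinfo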
Thanks for the suggestions.
More details on this, please? I am logged in as "root@pam" on PVE/PBS, yes. But the PBS stores on the PVE side have only been added using the "archiver@pbs" credentials for authentication.
In the datastore interface for the test store, the owner of the backups also seems to be correct, as far as I...
It should have? As mentioned, I use the "DatastorePowerUser" role for it. Or have I misunderstood something in the permission scheme?
Before that I also tried "DatastoreBackup" with the same results.
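In case it is useful, this is roughly how I would double-check the ACL from the PBS shell (the datastore name "teststore" is just a placeholder for my test store):

# list what archiver@pbs is actually allowed to do
proxmox-backup-manager acl list

# (re)apply the role on the datastore path if it is missing
proxmox-backup-manager acl update /datastore/teststore DatastorePowerUser --auth-id archiver@pbs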
Testing PBS in the lab and at the moment I have the setup up and running. (Both hosts running the latest packages)
* I can create backups from PVE
* View them in storage view from PVE
But I cannot restore. I get an error:
PVE side:
Error: HTTP Error 400 Bad Request: no permissions
TASK ERROR: command...
Have any other Ceph users noticed weirdness with the performance graph, where one of the read or write lines does not seem to reflect the real situation? Mine currently shows this and I think it's a bit off...
Specifically looking at the reads... for ±50 VMs this is weird.
One thing to note is that it was after...
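For what it's worth, the numbers I would compare the graph against come straight from Ceph itself, e.g.:

# current client I/O as Ceph reports it in the status summary
ceph -s

# per-pool client read/write rates
ceph osd pool stats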
One more note for those who might come to this thread looking for a HOWTO in the future, as I did now.
To unmount/disable/power down the SATA device, IF the system hasn't already done that for you:
> echo 1 > /sys/block/(whatever)/device/delete...
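Spelled out a little more, since I had to piece it together myself (sdX is a placeholder for the device name and the hdparm spin-down step is optional):

# unmount any mounted partitions of the disk first
umount /dev/sdX1
sync

# optionally spin the drive down before detaching it
hdparm -Y /dev/sdX

# tell the kernel to drop the device
echo 1 > /sys/block/sdX/device/delete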
I think most was eaten up by the ceph-osd processes. Interestingly, I remember seeing them go over the 4294967296 default osd_memory_target, but I will have to check whether it happens again in the current release (pve 5.4-6 | ceph 12.2.12).
I think that I miscalculated when planning the nodes and...
The nodes are still on pve 5.3-11 w/ ceph 12.2.11-pve1. This version, afaik, already comes with the BlueStore RAM cache enabled by default.
And yes, I used the default 4G. For a start I didn't see the need to change this.
ceph daemon osd.0 config show | grep memory_target
"osd_memory_target": "4294967296"...
Currently all nodes are under load and memory consumption is around 90-95% on each of them.
CEPH cluster details:
* 5 nodes in total, all 5 used for OSDs, 3 of them also used as monitors
* All 5 nodes currently have 64G of RAM
* OSDs: 12 disks in total per node - 6x 6TB HDD and 6x 500G SSD
*...
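For anyone following along, a rough budget assuming the default target applies to every OSD, HDD and SSD alike: 12 OSDs x 4 GiB osd_memory_target = 48 GiB per node for OSD caches alone, out of 64 GiB total, before the OS, monitors/managers and any spikes above the target. That by itself would land in the 90-95% range I'm seeing.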
So I was changing some network related settings on a 5-node Ceph cluster (still in the configuration/testing stage). To apply the settings I rebooted the nodes (3 of them monitors/managers, all five are OSD hosts, all nodes on pve-v5.3-11) one by one, waiting until the previous one came back up, and then...
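A side note for anyone repeating such rolling reboots: a common precaution (standard Ceph practice, and whether it matters for what I'm seeing here I can't say) is to set the noout flag first, so the cluster doesn't start rebalancing while each node is down:

ceph osd set noout     # before the first reboot
# ... reboot the nodes one by one, waiting for all OSDs to come back up in between ...
ceph osd unset noout   # once every node is back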
Thank you for the reply.
Thanks to your suggestion to try an older kernel version, I actually noticed what I had done wrong... I used to run this on an Intel machine and I had switched to AMD hardware with the same system disk. The problem was that I had left in place the nested configuration that works for kvm-intel...
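Concretely (module option names written from memory, so double-check before copying), the leftover vs. intended modprobe config looks something like this:

# /etc/modprobe.d/kvm.conf
# options kvm-intel nested=Y    <- leftover from the Intel box, ignored by kvm_amd
options kvm-amd nested=1

# then reload and verify:
# modprobe -r kvm_amd && modprobe kvm_amd
# cat /sys/module/kvm_amd/parameters/nested   (should print 1)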
I am encountering an interesting problem. I was upgrading my homelab Proxmox installation from the 5.0-5/c155b5bc build to the 5.0-30 build. Now, after the upgrade, I can no longer run VMs, as the system thinks that KVM is not accessible.
...
root@pve:~# qm start 101
Could not access...
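For reference, the basic checks that apply here (standard KVM troubleshooting, nothing Proxmox-specific):

# is the device node there, and are the modules loaded?
ls -l /dev/kvm
lsmod | grep kvm

# did the kernel say anything about kvm/svm/vmx at boot?
dmesg | grep -i -E 'kvm|svm|vmx'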