It does not work on the latest release, Proxmox 7.2-4.
Not to mention that after manually patching, a lot of errors show up regarding Cloudinit.pm when trying to regenerate a cloudinit drive for a VM.
This is referring to: sub configdrive2_gen_metadata {
The password for the Windows administrator is never...
There is no way to recover anything. The bug seems similar, because I/O and resource usage is probably higher than normal when backups are running. We did not run any backups, but we did start a remote rsync restoration to a VM; this caused high I/O and other resource usage, which led...
It can't be restored; the disks exist, but the partition table was wiped.
No backup was running, only what I specified in the first post.
It should get more priority because this is very alarming and can happen in a production environment.
That is what I also thought.
Now, the problem is that we are afraid to try again because we have no idea what else could happen, as there are more VMs running inside this cluster. If you have any other ideas about what we should check, please let us know.
Hello,
Thank you for your explanation.
Does what you mention apply if this specific VM was created about a month ago and was shut down and started/rebooted several times before starting the migration?
What I mean is that the changes were surely committed to the disks, as those were created a long time...
Hello,
We are running multiple VMs in the following environment: a Proxmox cluster with Ceph block storage; all OSDs are enterprise SSDs (RBD pool, 3x replicated).
Ceph version: 15.2.11
All nodes inside the cluster have exactly the following versions...
I believe he was talking about the storage itself, not the drive. You can see and change that from Datacenter -> Storage.
The Ceph storage, when edited, should have the KRBD option there.
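For reference, a minimal sketch of what an RBD storage entry with KRBD enabled can look like in /etc/pve/storage.cfg (the storage ID and pool name below are made-up examples, not taken from your setup):

    rbd: ceph-vm
        content images
        krbd 1
        pool vm-pool

As far as I know, toggling krbd on the storage does not affect already running VMs; it only takes effect the next time a VM's disks are activated, e.g. after a stop/start.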
Though some of us might notice a performance difference; that's why I will wait until they fix this issue.
What is your Ceph version, and are you using BlueStore?
Have a look at osd_memory_target, which I believe defaults to 4 GB in the newest versions.
I do believe that the OSDs should not eat more memory than they are told to; correct me if I'm wrong.
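In case it helps, a minimal sketch of how I would check this from a node (osd.0 / OSD id 0 is just an example, and 4294967296 is simply 4 GiB in bytes used as an illustrative value):

    ceph versions                                      # versions of all running daemons
    ceph osd metadata 0 | grep osd_objectstore         # should report "bluestore"
    ceph config get osd.0 osd_memory_target            # current target for that OSD, in bytes
    ceph config set osd osd_memory_target 4294967296   # example: set 4 GiB for all OSDs

Note that osd_memory_target is a target for the OSD's caches, not a hard limit, so short spikes above it are still possible.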
We had the same issue on 2 clusters, on all freshly installed nodes.
As stated above, wait until the time catches up with the timestamp in the RRD database and the errors will be gone.