Hi Peter,
Thanks, I'm also considering this option, but I would like to move the VM and LXC data disks from my previous LVM PVE partition to the new partition.
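For reference, this is roughly what I have in mind, assuming the new storage shows up in PVE as "new-lvm" (the VMIDs, disk names and storage name below are only placeholders):

```shell
# Placeholder IDs: VM 100, CT 101, target storage "new-lvm".
# Adjust to your actual VMIDs, disk names and storage ID.

# Move a VM disk to the new storage and delete the source copy:
qm move-disk 100 scsi0 new-lvm --delete 1

# Same for an LXC container volume (rootfs or a mount point mpX):
pct move-volume 101 rootfs new-lvm --delete 1
```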
Hi,
I also had issues with grub: /usr/sbin/grub-probe: error: disk `lvmid/xxxxxxxxxxxx' not found.
I've done that:
Now grub is ok but I have another issue during the boot process.
And I only get a BusyBox prompt after that.
I've tried to fix the bad blocks while booting Proxmox from the CD in...
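For context, this is roughly the kind of check I mean from the rescue shell (the device and VG names here are from my setup, adjust as needed):

```shell
# From the rescue shell / live CD; device and VG names are assumptions.

vgscan                      # detect LVM volume groups
vgchange -ay pve            # activate the "pve" volume group

fsck -f /dev/pve/root       # check the root LV (must be unmounted)

# If grub-probe still reports lvmid/... not found, reinstall grub
# from a chroot of the root LV:
#   grub-install /dev/sda && update-grub
```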
Hi @dpl
Thanks for your message, it's a good point. I'll try to do that.
But I would like to understand why my system, which worked perfectly before with CephFS, has become so slow that it is now unusable.
This is probably due to either a Proxmox PVE update or a Proxmox Ceph update.
Hi,
I have the same issues with CephFS (slow requests, slow ops, "oldest one blocked for xxx sec", freezes...) but Ceph RBD is working properly.
CephFS is unusable for backups; they now take hours or even days.
This problem suddenly appeared several months ago with Ceph Pacific. No...
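In case it helps anyone debugging the same thing, these are the read-only commands I would check first to narrow down which daemon is reporting the slow ops (none of them change cluster state):

```shell
ceph -s               # overall cluster health summary
ceph health detail    # lists which OSDs/MDS report the slow ops
ceph osd perf         # per-OSD commit/apply latency
ceph fs status        # MDS state of the CephFS filesystem
```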
Hello Fabian,
Your test was very instructive.
I've added a live CD to the virtual CD-ROM drive of a non-booting VM. The live CD boots up to the grub menu, but it crashes immediately after selecting the right entry in the grub menu...
After that, I've created a new VM with the live CD and this...
Hello Fabian,
@fabian Please let me know if you would like, and have time, to check a non-booting VM (bootloop in SeaBIOS) migrated from PVE 6.x to 7.x.
Regards
For several weeks now I have been stuck on this problem. All my old VMs created with QEMU/Cloud-Init since PVE 6.x no longer boot on PVE 7.x.
At first they worked perfectly on PVE 7.x, but an update (PVE update from the community repo) carried out at the beginning of September now...
Hi Spirit,
Yes I do; I've tried i440fx latest, 6.0, 5.2 and 5.1 without success.
I've also checked the status of the logical volumes for each VM with lvdisplay, and they are all available with read/write access.
Unfortunately, I've no other clues at this time...
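For completeness, this is the kind of check I did (the VG name "pve" is from my setup; adjust to your layout):

```shell
# List the LVs in the "pve" VG with their attribute flags:
lvs -o lv_name,vg_name,lv_attr,lv_size pve

# In the lv_attr column, e.g. "-wi-ao----":
#   2nd character "w" = writable (read/write access)
#   5th character "a" = active
#   6th character "o" = open (in use)
```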
I found an old VM backup from July 2021 with PVE 6.4 which was functional because already tested.
VM Details: debian-10.6.1-20201023-openstack-amd64.qcow2 - QEMU VM - created: 2020/12/13 19:05:36
I made a restore of this backup today on the same server with the latest PVE 7.x community updates...
I have exactly the same screenshot during the SeaBIOS "boot loop".
I am "happy" not to be alone but surprised that there are only two of us having this problem.
Hi Spirit,
I've manually added the line "boot: order=scsi0" to the VM config file, but it doesn't help.
Even in SeaBIOS, if I select the right partition to boot, it doesn't boot.
Since I have several nodes with many VMs on each node, I don't suspect file corruption and, a few weeks ago, all...
All my VMs run Debian 10.
I have no clue how to debug this, since SeaBIOS is not able to boot the VM.
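In case someone wants to reproduce, the equivalent CLI commands would be something like this (VMID 100 is only a placeholder):

```shell
# Set the boot order via the CLI instead of editing the config by hand:
qm set 100 --boot order=scsi0

# Verify what the VM config actually contains:
qm config 100 | grep -E '^(boot|bootdisk|scsi0)'
```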
journalctl doesn't provide useful information in my case.
Any advice on how to debug this will be very welcome,
Regards,
Hi,
I have several nodes and all of them are affected by the bootloop issue: all existing VMs with Cloud-Init no longer boot.
As of now, I can create a new VM with Cloud-Init and it works.
2 nodes have been upgraded from PVE 6.x to PVE 7.x and all existing VMs with Cloud-Init, originally...
Hi there,
I have exactly the same issue: VMs with Cloud-Init previously created on PVE 6.x are no longer working with the latest PVE 7.x update from the community repository.
The VMs can't start; they are stuck in a bootloop in SeaBIOS (Disk not found).
My PVE host also uses ext4 LVM storage.
With the latest PVE...
Unfortunately the last PVE updates today didn't solve the issue.
pve-firmware 3.3-2
pve-kernel-5.11 7.0-8
pve-kernel-5.11.22-5-pve 5.11.22-10
The VMs are still in a bootloop (CDROM or Disk not found).
CTs are still working properly.