Dear forum members,
A few weeks ago, probably during a PVE 7.x package update (PVE 7 was already installed), all my QEMU VMs using Cloud-Init and SeaBIOS stopped starting. During boot it looks as if the partition is not found, and SeaBIOS remains in an infinite loop: try to start the...
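For anyone hitting the same symptom, a first check worth doing (the VM ID 100 below is only an example) is to look at the boot disk and boot order the VM is actually configured with, and at the KVM command line Proxmox generates for it:
:~# qm config 100 | grep -E 'boot|bios|ide|scsi|virtio'
:~# qm showcmd 100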
Hi @avw
Very good point, it works!
I hadn't noticed this change (cgroup -> cgroup2) during the last major upgrade from PVE 6.x to 7.x.
Many thanks,
Regards,
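In case it helps others: the change boils down to the device allow key in the container config moving from the cgroup v1 name to the cgroup v2 one. A minimal sketch, assuming container ID 101 and /dev/ttyACM0 (char major 166, minor 0), adjust both to your setup:
# /etc/pve/lxc/101.conf (101 is a hypothetical container ID)
# old cgroup v1 key, no longer effective with cgroup2 on PVE 7:
# lxc.cgroup.devices.allow: c 166:0 rwm
# cgroup v2 replacement:
lxc.cgroup2.devices.allow: c 166:0 rwm
lxc.mount.entry: /dev/ttyACM0 dev/ttyACM0 none bind,optional,create=file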
Dear PVE users,
Previously I was running an LXC container with USB passthrough smoothly on PVE 6.4.x.
After upgrading from PVE 6.4.x to PVE 7.0.x, this container can no longer access the USB device.
From PVE server:
:~# test -w /dev/ttyACM0 && echo success || echo failure
success
:~# ls...
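For comparison, the same checks can be run from inside the container via pct exec, e.g. (101 is a placeholder container ID):
:~# pct exec 101 -- sh -c 'test -w /dev/ttyACM0 && echo success || echo failure'
:~# pct exec 101 -- ls -l /dev/ttyACM0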
Hello Fabian,
I've generated a VG backup using vgcfgbackup pve on both nodes and they are identical except, of course, for the node name and the lvmid (PV & VG).
I have no idea why the upgrade went wrong on only this node, so I've decided to do a fresh install of PVE 7.x on this node and it was a...
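For reference, the comparison was just a metadata dump and a diff, something like this ("workingnode" is a placeholder hostname and the /tmp paths are arbitrary):
:~# vgcfgbackup pve -f /tmp/pve-vg-$(hostname).txt
:~# scp workingnode:/tmp/pve-vg-workingnode.txt /tmp/
:~# diff -u /tmp/pve-vg-workingnode.txt /tmp/pve-vg-$(hostname).txt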
Hello @fabian
I performed additional tests: I downgraded the grub version.
After the 6.x to 7.x upgrade the grub version is grub2/testing 2.04-19 amd64.
I downgraded grub to the PVE 6.x version based on Buster, which worked well: grub2/now 2.02+dfsg1-20+deb10u4 amd64.
I've added these...
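For anyone who wants to reproduce the downgrade test, it is essentially pinning the Buster grub packages while the buster repository is still present in sources.list; a rough sketch (package set assumes a legacy BIOS install, adjust for UEFI):
:~# apt install grub-pc=2.02+dfsg1-20+deb10u4 grub-pc-bin=2.02+dfsg1-20+deb10u4 \
      grub2-common=2.02+dfsg1-20+deb10u4 grub-common=2.02+dfsg1-20+deb10u4
:~# update-grub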
"Grub fails to find LVM volume after previous LV rename" seems to be different because I hadn't renamed or changed the LVM.
Do you have any other tests or suggestions?
I hesitate to reinstall everything.
The auto-generated grub.cfg is really different on the non-working node compared to the working nodes.
On the non-working node, rebooting still works, but I'm not comfortable running it with a potential grub failure looming.
I've probably found the origin of this issue.
On the non-working node, the /boot/grub/grub.cfg content is really different.
A lot of info is missing on the non-working node.
Extract from the non-working node:
.....
.....
function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod...
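To see exactly what is missing, diffing the generated config against a working node is the quickest way; for example (the hostname is a placeholder):
:~# scp workingnode:/boot/grub/grub.cfg /tmp/grub.cfg.working
:~# diff -u /tmp/grub.cfg.working /boot/grub/grub.cfg | less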
Same results on a non-working node and a working node
:~# grub-probe --target=device /
/dev/mapper/pve-root
:~# grub-probe --target=device /boot
/dev/mapper/pve-root
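A few more grub-probe targets might show whether grub-probe at least detects the LVM layer; something along these lines:
:~# grub-probe --target=fs /
:~# grub-probe --target=abstraction /
:~# grub-probe --target=hints_string /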
Perhaps a clue for the non-working node:
the VG UUID is OK,
but the PV UUID looks different.
@fabian, please let me know what you think.
YAGA
:~# pvs -a
PV VG Fmt Attr PSize PFree
/dev/nvme0n1 ---...
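To make the UUID comparison explicit on both nodes, the relevant report fields can be selected directly, e.g.:
:~# pvs -o pv_name,vg_name,pv_uuid,vg_uuid
:~# vgs -o vg_name,vg_uuid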
Hello Fabian,
Here is the info for a non-working node
# lvmconfig --typeconfig full devices/global_filter
global_filter="r|/dev/zd.*|"
# cat /etc/default/grub /etc/default/grub.d/*
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full...
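With the filter and the grub defaults looking sane, the next step I'd try is regenerating and reinstalling grub on that node; a sketch assuming a legacy BIOS boot from the nvme0n1 system disk mentioned above (UEFI/systemd-boot setups should use proxmox-boot-tool instead):
:~# update-grub
:~# grub-install /dev/nvme0n1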
Hi Proxmox team and Proxmox users,
As suggested by @fabian here is a new thread for this issue.
Congratulations to the team for the Proxmox 7 release.
I've upgraded a 4-node cluster (NVMe SSD nvme0n1 for the filesystem, Ceph on sda and sdb) from the latest 6.x to 7.x.
Nodes are...