[SOLVED] VMs not starting after proxmox 6 to 7 upgrade

iruindegi

Hi,
I just upgraded my Proxmox 6 install to 7.1 without errors, but now none of my VMs start. I see "Booting from Hard Disk..." and it stays stuck there.

I just created a new VM (next, next, next) with an Ubuntu ISO and the same thing happens. This is the VM config:

Code:
boot: order=scsi0;ide2;net0
cores: 2
ide2: local:iso/ubuntu-21.10-desktop-amd64.iso,media=cdrom
memory: 2048
meta: creation-qemu=6.1.0,ctime=1640257561
name: ubuntu
net0: virtio=76:6F:73:49:0D:59,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-102-disk-0,size=80G
scsihw: virtio-scsi-pci
smbios1: uuid=135b3ff0-2094-4bb9-b2da-e67656341eb1
sockets: 1
vmgenid: 8dcfb9a8-fafe-4ab9-a3ba-ccab80b12aef
 
hi,

did you reboot after the upgrade?

please also post the output of pveversion -v here.

I just created a new VM (next, next, next) with an Ubuntu ISO and the same thing happens
so the ubuntu ISO boots fine?
 
yes I did reboot. This is the output:
Code:
➜  ~ pveversion -v
proxmox-ve: 7.1-1 (running kernel: 5.4.128-1-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.4: 6.4-11
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
 
yes I did reboot.
are you sure? it seems you're still running the older kernel (though the newer one is also installed):
Code:
proxmox-ve: 7.1-1 (running kernel: 5.4.128-1-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.4: 6.4-11
pve-kernel-5.13.19-2-pve: 5.13.19-4
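A quick way to double-check this yourself after a reboot (a sketch for a Debian/Proxmox host; `/boot/vmlinuz-*` is the standard kernel image naming there):

```shell
# Compare the running kernel against the newest kernel installed on disk.
running="$(uname -r)"
echo "running: $running"
# newest installed kernel image, by version sort
newest="$(ls -1 /boot/vmlinuz-* 2>/dev/null | sed 's|.*/vmlinuz-||' | sort -V | tail -n1)"
echo "newest installed: $newest"
[ "$running" = "$newest" ] || echo "WARNING: not running the newest installed kernel"
```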
 
are you booting via grub or systemd-boot? [0]

maybe for some reason the installed kernel wasn't added to the boot options

[0]: https://pve.proxmox.com/wiki/Host_Bootloader
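As a rough first check along the lines of that wiki page: a legacy BIOS boot implies grub, while UEFI can use either bootloader. A minimal sketch:

```shell
# Rough bootloader check: on a legacy BIOS boot there is no /sys/firmware/efi,
# so grub is the bootloader. Under UEFI it could be grub or systemd-boot;
# see the wiki page for the full decision tree.
if [ -d /sys/firmware/efi ]; then
    echo "UEFI boot - check 'efibootmgr -v' / 'proxmox-boot-tool status'"
else
    echo "legacy BIOS boot - grub"
fi
```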
I executed
Code:
➜  ~ efibootmgr -v
EFI variables are not supported on this system.

So if I understand correctly, that means GRUB is being used. Now, how can I fix my problem?
I tried update-grub and rebooted, but it's still not working.
 
So if I understand correctly, that means GRUB is being used. Now, how can I fix my problem?
I tried update-grub and rebooted, but it's still not working.
did you modify your default grub config in /etc/default/grub in any way? can you paste the file here?
 
here it is
Code:
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
#   info -f grub -n 'Simple configuration'

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
 
okay that looks like the default one.

what do you get with proxmox-boot-tool status?

could you also post the output from find /boot?

you can also post/attach the /boot/grub/grub.cfg file (that's generated when you run update-grub from the default conf)
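Collected in one place, the requested diagnostics look roughly like this (run as root on the PVE host; the grep simply lists the boot entries grub will offer):

```shell
# Boot diagnostics for a grub-based Proxmox VE 7 host.
proxmox-boot-tool status                              # which ESPs/bootloader PVE manages
find /boot -maxdepth 2                                # kernels, initrds and grub files present
grep -E '^(menuentry|submenu)' /boot/grub/grub.cfg    # boot menu entries grub generated
```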
 
thanks for the outputs so far :)

but that's weird, the grub menu has the newer kernel showing up as the first entry...

do you have physical access to the machine? if yes then you could watch it boot.

otherwise I'm not sure what the problem is here; it just seems to ignore the grub menu even though the kernel is installed?
you could also try setting the default option specifically to the menu entry of the newer kernel.
check my post here [0] for instructions; you'll need to adapt the entry according to your grub.cfg


[0]: https://forum.proxmox.com/threads/revert-to-prior-kernel.100310/#post-434580
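For reference, pinning grub to one entry boils down to editing GRUB_DEFAULT and regenerating the config. A sketch; the entry string below is only an example for the 5.13.19-2-pve kernel from this thread, and the real titles must be copied from your own grub.cfg:

```shell
# Pin grub to a specific kernel entry. Join the submenu title and the
# menuentry title from /boot/grub/grub.cfg with '>' (example names below).
entry='Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.13.19-2-pve'
sed -i "s|^GRUB_DEFAULT=.*|GRUB_DEFAULT=\"$entry\"|" /etc/default/grub
update-grub    # regenerate /boot/grub/grub.cfg with the new default
```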
 
I found the problem! The server is an HP MicroServer Gen8, and I had forgotten that it boots from a USB drive carrying GRUB so it can then boot from an SSD. I restored that USB drive and now everything works.

Code:
➜  ~ pveversion -v           
proxmox-ve: 7.1-1 (running kernel: 5.13.19-2-pve)
pve-manager: 7.1-8 (running version: 7.1-8/5b267f33)
pve-kernel-helper: 7.1-6
pve-kernel-5.13: 7.1-5
pve-kernel-5.4: 6.4-11
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.4.157-1-pve: 5.4.157-1
pve-kernel-5.4.143-1-pve: 5.4.143-1
pve-kernel-5.4.140-1-pve: 5.4.140-1
pve-kernel-5.4.128-1-pve: 5.4.128-2
pve-kernel-5.4.124-1-pve: 5.4.124-2
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.0
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.0-14
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.0-4
libpve-storage-perl: 7.0-15
libqb0: 1.0.5-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.2.0-3
proxmox-backup-client: 2.1.2-1
proxmox-backup-file-restore: 2.1.2-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-4
pve-cluster: 7.1-2
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-3
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.1-pve3
➜  ~
 
I found the problem! The server is an HP MicroServer Gen8, and I had forgotten that it boots from a USB drive carrying GRUB so it can then boot from an SSD. I restored that USB drive and now everything works.
great, that makes sense.

do your VMs start normally now?

you can mark the thread [SOLVED] so others will know what to expect :)
 