PVE 8.0 and 8.1 hangs on boot

Another Dell R340 server here with the same problem booting into 6.5.11-7-pve. I tried updating the initramfs as outlined earlier in this thread and that did not help. I did not get a chance to boot with nomodeset as it's the end of my shift. This is after a PVE 7-to-8 upgrade. NOTE: this was not listed in the known issues, which is annoying.

Please let me know what information I can pull from the server to help debug this.
 
Hi guys, I found that if you disable x2APIC in the BIOS under processor configuration, it works fine with the new kernel. It seems the problem is related to how the new kernel handles multithreading, but because the R240 is a single-CPU server this option isn't needed, at least in my case.
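If anyone wants to double-check before touching the BIOS, here is a quick (untested beyond my own box) way to see whether x2APIC is actually active, run from a kernel that still boots (e.g. a pinned 6.2):

Bash:
# The kernel log shows whether x2APIC mode was actually enabled at boot
journalctl -k -b | grep -i x2apic    # e.g. "x2apic enabled" / "x2apic: enabled by BIOS"
# The CPU flag only tells you the processor *supports* x2APIC, not that it is in use
grep -m1 -o x2apic /proc/cpuinfo
# After flipping the BIOS option, the "x2apic enabled" line should be gone.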
 
Thank you for your contribution, that also resolved my issue right away. Booted into the new kernel, and hyper-threading works on the single socket.

Thanks again for poking around!
 
Hi guys, I found that if you disable x2APIC in the BIOS under processor configuration, it works fine with the new kernel. It seems the problem is related to how the new kernel handles multithreading, but because the R240 is a single-CPU server this option isn't needed, at least in my case.
Can confirm it also works on a Dell T140. Thank you!
 
Hi guys, I found that if you disable x2APIC in the BIOS under processor configuration, it works fine with the new kernel. It seems the problem is related to how the new kernel handles multithreading, but because the R240 is a single-CPU server this option isn't needed, at least in my case.
This worked for me as well. Thanks!
 
On an ASUS board I enabled the x2APIC opt-out, to no avail. Tried with kernel 6.2 as well.


I managed to get the system functional again after several trials and "errors". The following is more or less what worked for me:

Bash:
# 'lsblk' and 'fdisk -l' can help identify the partitions
export LANG=C
export LC_ALL=C.UTF-8

# Mount the root, /boot and EFI partitions (device names are from my setup)
mount /dev/md127 /mnt
mount /dev/md126 /mnt/boot
mount /dev/nvme0n1p1 /mnt/boot/efi

# Bind-mount the pseudo filesystems and enter the chroot
mount --rbind /dev /mnt/dev
mount --rbind /proc /mnt/proc
mount --rbind /sys /mnt/sys
chroot /mnt
# In the chroot

# Add loglevel=7 to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, e.g.:
# GRUB_CMDLINE_LINUX_DEFAULT="loglevel=7"

export LC_ALL=C.UTF-8
apt-get install console-setup
dpkg-reconfigure grub-efi-amd64
update-grub
apt-get install pve-kernel-6.2.16-20-pve
proxmox-boot-tool kernel pin 6.2.16-20-pve
grub-install

# However, the reinstallation of grub above is not perfect.
# A copy of /boot to /bootcopy helped during boot;
# for some reason /boot was not properly mapped.

# I managed to boot properly by editing the grub boot entry
# and changing /boot/ to /bootcopy/ in the 'linux' and 'initrd' lines.
cp -Rp /boot /bootcopy

Now reboot, and set working paths in the 'linux' and 'initrd' lines of the target boot entry.

After the reboot, regenerate grub:

Bash:
update-grub
grub-install

In the end, I could boot into 6.5.11-7-pve, so possibly the issues were with grub rather than with the kernel.
I noticed there are "proxmox-boot-tool reinit" and "proxmox-boot-tool init" commands, so maybe these could also help.
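For reference, an untested sketch of what that would probably look like from inside the chroot above (the ESP path is the one from my setup, adjust as needed):

Bash:
proxmox-boot-tool status                # list the ESPs the tool currently knows about
proxmox-boot-tool init /dev/nvme0n1p1   # (re)register the EFI system partition if it is missing
proxmox-boot-tool refresh               # copy kernels/initrds and rewrite the boot entries
proxmox-boot-tool kernel list           # check which kernels (and pins) ended up there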


EDIT:
As I was examining the journal this morning, I noted that the server had actually been restarting without ever showing the prompt, and was not reachable as far as I remember. It could be that the kernel version is important, although I did not notice anything meaningful in my examination of the log differences (slightly redacted) between the 6.5.11-4-pve startup and the 6.5.11-7-pve startup (lines prefixed with '>' correspond to the successful startup). I also cleaned up a hostname change.
So it seems likely that installing and pinning 6.2.16-20-pve allowed the first successful reboot, and that subsequently updating the system while installing 6.5.11-7-pve allowed a successful startup from the 6.5 kernel.
 


I can confirm that on a Dell T140 with a BOSS-SSD RAID 1 and FW 2.15.1, the suggestion from https://forum.proxmox.com/threads/pve-8-0-and-8-1-hangs-on-boot.137033/post-620284 (#42), "I found that if you disable x2APIC in BIOS under processor configuration it works fine with the new kernel", is the only thing that worked for me with kernel 6.5 (after a lot of tryouts); otherwise I would have had to stick with pinning kernel 6.2.
Thank you all very much!
Alexander
 
Hey y'all.

I'm experiencing this issue with Proxmox 8.1 on a Dell R720. I'm seeing the various fixes people are proposing, involving editing grub files or `/etc/initramfs-tools/modules` or things like that...

How do I do that though if I can't boot?

I've got iDRAC running, but if I'm not booted I can't edit files on my OS.
Do I need to boot from a Proxmox or Debian ISO image or something?
 
Should I be regretting overwriting my vSphere install for this? Fresh install of 8.1 on a Supermicro-based system. Same thing. Hanging at boot.

Like @chrispitzer, I have the same question. Recovery mode doesn't even boot.

My Proxmox Server:
SuperMicro X10DRL-i motherboard
2 x Xeon E5-2699v4 CPUs
512GB RAM
5 x 2TB SSDs (ZFS RAID10)
Nvidia Quadro M4000
 
Yep, that did the trick :)
The fix enables booting on these kernels:
- 5.15.126-1-pve
- 6.2.16-19-pve
- 6.5.11-4-pve
Thanks, Thomas :cool:

To recap in case others come across this thread:
Bash:
echo "simplefb" >> /etc/initramfs-tools/modules
update-initramfs -u -k 6.5.11-4-pve    # or: update-initramfs -u -k all
# reboot

My configuration in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset amd_iommu=on iommu=pt"
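In case it helps anyone following the recap: on my (GRUB-booted) setup the /etc/default/grub change only takes effect after regenerating the boot config, i.e. something like:

Bash:
update-grub                  # rebuild grub.cfg with the new GRUB_CMDLINE_LINUX_DEFAULT
# proxmox-boot-tool refresh  # instead, if your ESPs are managed by proxmox-boot-tool
reboot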
Hello.

Looks like I have the same problem.

While I am in the "Loading initial ramdisk" screen, how do I actually enter the commands you sent?
I know I have to enter these commands in a terminal, but how do I actually get to a terminal?

I am a complete newbie. Any suggestions will be helpful. Thanks!
 
Hey y'all.

I'm experiencing this issue with Proxmox 8.1 on a Dell R720. I'm seeing the various fixes people are proposing, involving editing grub files or `/etc/initramfs-tools/modules` or things like that...

How do I do that though if I can't boot?

I've got iDRAC running, but if I'm not booted I can't edit files on my OS.
Do I need to boot from a Proxmox or Debian ISO image or something?
You can boot to the rescue shell provided by the boot menu.
That runs even when a normal boot isn't possible.
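If even that doesn't come up, a one-time edit of the boot entry from the GRUB menu (over the iDRAC virtual console) is usually enough to get a first boot; roughly like this, assuming a standard GRUB setup:

Bash:
# At the GRUB menu: highlight the Proxmox VE entry and press 'e'.
# Find the line starting with 'linux' and append the parameter to test, e.g.:
#
#   linux /boot/vmlinuz-6.5.11-7-pve root=... ro quiet nomodeset
#
# Press Ctrl-x (or F10) to boot with that change for this boot only.
# Once the system is up, make it permanent in /etc/default/grub and/or add
# "simplefb" to /etc/initramfs-tools/modules as described earlier, then
# run update-grub / update-initramfs -u.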
 
Hi guys, I found that if you disable x2APIC in the BIOS under processor configuration, it works fine with the new kernel. It seems the problem is related to how the new kernel handles multithreading, but because the R240 is a single-CPU server this option isn't needed, at least in my case.
Fixed my issue. Thanks!
 
Yep, that did the trick :)
The fix enables booting on these kernels:
- 5.15.126-1-pve
- 6.2.16-19-pve
- 6.5.11-4-pve
Thanks, Thomas :cool:

To recap in case others come across this thread:
Bash:
echo "simplefb" >> /etc/initramfs-tools/modules
update-initramfs -u -k 6.5.11-4-pve    # or: update-initramfs -u -k all
# reboot

My configuration in /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="nomodeset amd_iommu=on iommu=pt"




Me too, but even with your config set I still can't boot; it stops at "Loading initial ramdisk".
 
Hi guys, I found that if you disable x2APIC in the BIOS under processor configuration, it works fine with the new kernel. It seems the problem is related to how the new kernel handles multithreading, but because the R240 is a single-CPU server this option isn't needed, at least in my case.
thank you <3
 
That shouldn't be necessary, in my opinion. I think with the way the Broadcom/VMware debacle has gone, Proxmox is going to see a LOT more traffic. They need to absolutely NAIL getting a new machine up and running. This experience was terrible, and I ended up going back to vSphere.

I had a working hypervisor in 10 minutes.
 
That shouldn't be necessary, in my opinion. I think with the way the Broadcom/VMware debacle has gone, Proxmox is going to see a LOT more traffic. They need to absolutely NAIL getting a new machine up and running. This experience was terrible, and I ended up going back to vSphere.

I had a working hypervisor in 10 minutes.
With all due respect, your opinion is based on your lack of experience with the Debian + Proxmox stack.
 
That shouldn't be necessary, in my opinion. I think with the way the Broadcom/VMware debacle has gone, Proxmox is going to see a LOT more traffic. They need to absolutely NAIL getting a new machine up and running. This experience was terrible, and I ended up going back to vSphere.

I had a working hypervisor in 10 minutes.
I installed VirtualBox in 3 minutes and had a working hypervisor. I installed Windows Server 2022 and had a working hypervisor... XCP-ng and so on. This is foolish thinking.

PVE offers so much more than either of those solutions. Notice how I didn't say "better"; depending on the use case, any hypervisor can be a great choice.

I noticed your post history is all about migrating away from vSphere and VMware products. Maybe you should have familiarized yourself with the process before jumping in with production VMs.

Most of what you were asking about is in the documentation provided by the PVE team and various contributors. For a "free" product, without paid support, the issue was remediated within 3 weeks. That's pretty good considering they don't work for me, or for you.

However, you seem to have made up your mind, so please don't try to poison the well for prospective new users who may happen upon this thread.
Proxmox is extremely easy to install and administer from my point of view.

Enjoy the new VMware fee structure.
 
I installed VirtualBox in 3 minutes and had a working hypervisor. I installed Windows Server 2022 and had a working hypervisor... XCP-ng and so on. This is foolish thinking.

PVE offers so much more than either of those solutions. Notice how I didn't say "better"; depending on the use case, any hypervisor can be a great choice.

I noticed your post history is all about migrating away from vSphere and VMware products. Maybe you should have familiarized yourself with the process before jumping in with production VMs.

Most of what you were asking about is in the documentation provided by the PVE team and various contributors. For a "free" product, without paid support, the issue was remediated within 3 weeks. That's pretty good considering they don't work for me, or for you.

However, you seem to have made up your mind, so please don't try to poison the well for prospective new users who may happen upon this thread.
Proxmox is extremely easy to install and administer from my point of view.

Enjoy the new VMware fee structure.

The context I didn't provide here was that I did dig in and learn some stuff about Proxmox ahead of time. I installed Proxmox on a different machine, a workstation rather than a server, then migrated workloads from vSphere to Proxmox. It was working fine. That wasn't the final goal, though; the idea was to have a sort of middle ground: keep things running while I set up Proxmox on the real server.

When I tried to set it up on the actual server hardware, that's where things went south.

You might want to tinker with your hypervisor. I would rather it just work, and stay out of the way.

I haven't made up my mind. I'm still a paying Proxmox subscriber. I'm not gonna pull punches about my experience with it.

My apologies if that offends you, that's not my intent.
 
