Proxmox hangs after upgrade to v9

yunmkfed

Member
Sep 2, 2023
www.alanbonnici.com
Last Saturday I upgraded from Proxmox v8 to v9 based on the instructions downloaded from here. It went smoothly.

Today I noticed that the VMs and containers were down. The console had the following text on it and was not responsive.

The VM had been running smoothly without a hitch for months.

Any help to diagnose / resolve this would be appreciated.
 

Attachments

  • photo_2025-10-09_16-52-16.jpg (429.7 KB)
In the photo I can see exim4. Proxmox itself uses Postfix, not Exim. Is your system a standalone Proxmox or Proxmox on top of Debian?
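You can check which MTA is actually installed with something like this (just a quick sanity check, nothing Proxmox-specific):

Code:
# list any installed exim4 / postfix packages
dpkg -l | grep -Ei 'exim4|postfix'
# see whether the exim4 service exists / is active
systemctl status exim4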
 
Hi, it looks like something happened during your upgrade from 8 to 9. For a major upgrade I usually move the VMs out and then do a completely fresh install of PVE, because of the kernel changes.
I am thinking along those lines.

Would you know which config files I can save (backups, file shares, etc.) to avoid having to recreate them?
 
For full-node rollback, a disk image (Clonezilla, etc.) is the quickest path.
To confirm that I am following your suggestions correctly:

1. Back up the VM/CTs
2. Take a full image of the server using Clonezilla
3. Wipe the server and install VE 9.0
4. Open the Clonezilla image (<<------ is this possible?)
5. Extract the config files related to backups, remote devices, etc. (<<------ would you know where they are located / what they are called, please?)
6. Transfer the config files from (5)
7. Restore the VM/CTs

Thanks
 
Hi, I usually don't do this. I only back up all the VMs, then format the node and install the latest PVE. After that I import the VMs back into PVE. This way is cleaner.
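Roughly like this, as a sketch - the storage name, paths and VMIDs below are just placeholders for whatever you actually use:

Code:
# on the old node: dump all VMs/CTs to a backup storage
vzdump --all --mode stop --storage backupstore --compress zstd

# on the freshly installed PVE 9 node: restore the dumps
qmrestore /mnt/backupstore/dump/vzdump-qemu-100-*.vma.zst 100
pct restore 101 /mnt/backupstore/dump/vzdump-lxc-101-*.tar.zst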
v8 was working flawlessly :(

I'm hoping to salvage some configuration files - I had NUT (UPS) working perfectly and LAN bonding working well - but I can redo them if keeping them risks an unstable system.
 
The first lines of the error messages are the important ones. Could you reboot and take a picture when they appear again, from the beginning?
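If the console is not readable, you can also pull the messages from the previous boot out of the journal (this assumes persistent journalling is enabled):

Code:
# everything of priority "error" and worse from the previous boot
journalctl -b -1 -p err
# kernel messages from the previous boot
journalctl -b -1 -k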
The full /etc should be saved while the node is running so that the mounted /etc/pve is included too.
And maybe any path where you keep your own scripts, like /usr/local/bin, or any documentation of your own in /root, if there is any.
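Something along these lines, run while the node is still up so the /etc/pve FUSE mount is readable (the target path is just an example):

Code:
# /etc/pve is only populated while pve-cluster is running
tar czf /mnt/usb/pve-node-backup.tar.gz /etc /usr/local/bin /root
# this also catches /etc/network/interfaces (bonding), /etc/nut (UPS),
# /etc/pve/storage.cfg and /etc/pve/jobs.cfg (storages / backup jobs)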
 
I would assume that if you set "start on boot" to off for all VMs and LXCs, or even stop them all, and then reboot just PVE, no errors will show up on the console after the PVE prompt. I think there is something problematic with an AppArmor setting on one or more LXCs/VMs, because of behaviour that changed from v8 to v9.
Start and stop each VM/LXC separately to see which ones generate your problem and which ones are fine.
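A rough sketch of that (VMIDs 100/101 are just examples, adjust to your setup):

Code:
# turn off autostart for every VM and container
for id in $(qm list | awk 'NR>1 {print $1}'); do qm set "$id" --onboot 0; done
for id in $(pct list | awk 'NR>1 {print $1}'); do pct set "$id" --onboot 0; done

# after rebooting the node, start them one at a time and watch the console/journal
qm start 100
pct start 101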