Spent my Sunday afternoon doing the Proxmox 8 to 9 jump. Whole process took roughly 90 minutes with a few hiccups along the way. Writing this up since I had trouble finding current documentation when I got stuck.
Initial system scan
First step was running sudo pve8to9 --full to see what needed attention:
- systemd-boot package causing a hard failure
- Intel microcode not installed
- Issues with GRUB bootloader setup
- One active VM needed shutdown
The systemd-boot error had me worried at first - the checker made it sound like removing it would brick the system. Quick investigation with bootctl status and efibootmgr -v revealed I was actually booting with GRUB all along. The systemd-boot package was just leftover cruft from who knows when. Safe to remove with sudo apt remove systemd-boot.
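For anyone hitting the same warning, these are the exact commands I used to verify the bootloader and clear it:
Code:
bootctl status       # confirms which boot loader actually started the system
sudo efibootmgr -v   # lists the EFI boot entries as a second opinion
sudo apt remove systemd-boot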
The microcode situation required adding non-free-firmware to my sources list, then installing the intel-microcode package followed by a reboot.
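Roughly what that looked like on my host (the exact deb line depends on how your sources.list is laid out, so treat it as a template):
Code:
# ensure the Debian entries in /etc/apt/sources.list carry the non-free-firmware component, e.g.
# deb http://deb.debian.org/debian bookworm main contrib non-free-firmware
sudo apt update
sudo apt install intel-microcode
sudo reboot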
GRUB bootloader fix:
Code:
# tell grub-efi-amd64 to also install GRUB to the removable media path on the ESP
echo 'grub-efi-amd64 grub2/force_efi_extra_removable boolean true' | sudo debconf-set-selections -v -u
# reinstall so the new debconf answer actually takes effect
sudo apt install --reinstall grub-efi-amd64
Once everything was addressed, the checker gave me a clean bill of health.
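Re-running the checker is the easiest way to confirm you're actually done:
Code:
sudo pve8to9 --full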
Running the upgrade
Modified repository sources to point at trixie instead of bookworm:
Code:
sudo sed -i 's/bookworm/trixie/g' /etc/apt/sources.list
sudo sed -i 's/bookworm/trixie/g' /etc/apt/sources.list.d/pve-*.list
Kicked off the upgrade inside a screen session (always a good idea when working over SSH):
Code:
screen -S upgrade
sudo apt update
sudo apt dist-upgrade
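If the SSH connection drops partway through, the upgrade keeps running inside screen and you can simply reattach:
Code:
screen -r upgrade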
Roadblocks encountered
Package conflicts with Docker
Ran into file ownership conflicts between Debian's docker packages and the ones from Docker's official repository. The docker-compose-plugin package was blocking installation of Debian's docker-compose.
Solution was forcibly removing the conflicting packages:
Code:
sudo dpkg --remove --force-all docker-compose-plugin
sudo dpkg --remove --force-all docker-buildx-plugin
Followed by sudo apt --fix-broken install to let the upgrade continue.
Configuration file updates
Several config files needed decisions. I kept my customized SSH config since it has specific security rules (no root access, conditional password auth based on source IP). For system files like GRUB and LVM that I'd never touched, I went with the maintainer's new versions.
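For context, the SSH rules are along these lines - illustrative excerpt only, the subnet is a placeholder and not my real config:
Code:
# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no
PasswordAuthentication no
# allow password auth only from the trusted management network
Match Address 192.168.10.0/24
    PasswordAuthentication yes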
Package dependency loops
Apparently normal for major version jumps - I ended up cycling through sudo dpkg --configure -a and sudo apt --fix-broken install several times until all dependencies resolved.
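The loop, for reference:
Code:
sudo dpkg --configure -a
sudo apt --fix-broken install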
The aftermath
Version check after completion:
Code:
pveversion
# pve-manager/9.0.11/3bf5476b8a4699e2
Everything looked solid. System came back up on kernel 6.14. Then I tried checking my containers...
Code:
docker ps
# Cannot connect to the Docker daemon...
Uh oh. Docker got removed when I cleaned up packages marked for autoremoval. Not ideal since this box is primarily a Docker host running production workloads.
Quick reinstall fixed it:
Code:
sudo apt install docker.io docker-compose containerd runc
sudo systemctl start docker
sudo systemctl enable docker
Thankfully all the actual container data in /var/lib/docker survived intact. Just needed to bring everything back online.
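Bringing them back up was just a matter of restarting each stack, something along these lines (the path is a placeholder for wherever your compose files live):
Code:
cd /opt/stacks/example   # placeholder path
docker-compose up -d
docker ps                # confirm the containers are back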
Strange Windows VM behavior
One Windows VM I run (it hosts Signal Desktop and the Google Messages web app) lost authentication on both services after the restart. Both required going through the linking process again.
Possible causes:
- Clock synchronization - VM was offline for about 80 minutes, time drift might have invalidated auth tokens
- Virtual network changes - MAC address or interface assignment could have shifted during the upgrade
- Improper shutdown - VM might have been in saved state rather than cleanly powered off
Minor inconvenience, just needed to rescan QR codes for both services.
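If it happens again, a couple of quick host-side checks should narrow down which of those it was (the VM ID is a placeholder):
Code:
qm config 101 | grep -i '^net'   # confirm the VM kept its MAC address
timedatectl status               # confirm the host clock and NTP sync are sane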
Takeaways for next time
- Review autoremove candidates before actually removing anything (see the sketch after this list)
- Document which config files have been modified ahead of time
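Both checks are quick to script ahead of the next upgrade; rough sketch below (debsums isn't installed by default, so that part assumes you add it):
Code:
sudo apt-get -s autoremove   # simulate only - shows what autoremove would take out
sudo debsums -ce             # list config files that differ from the packaged versions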
Overall pretty smooth once I sorted out the Docker repository conflicts. System's running stable on Debian 13 with kernel 6.14 now.
Anyone using Docker's official repos will likely hit similar package conflicts. Just be prepared to temporarily remove their packages and reinstall after the upgrade finishes.