Afraid to Reboot because of error during last dist-upgrade

c0mputerking

I did a dist-upgrade today and it popped up a dialog box (you know, the blue/grey/red type) containing some information about grub2 (sadly I did not take a screenshot). The dialog asked if I wanted to continue, and of course I said yes. Then I got an error about not being able to write to the disk because it is full. The error below is similar but may not be exact, as I have since tried a few things to make it go away.

# apt-get dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following package was automatically installed and is no longer required:
  pve-kernel-5.15.35-2-pve
Use 'apt autoremove' to remove it.
The following packages will be REMOVED:
  pve-kernel-5.3.10-1-pve
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
2 not fully installed or removed.
After this operation, 284 MB disk space will be freed.
Do you want to continue? [Y/n] y
(Reading database ... 207818 files and directories currently installed.)
Removing pve-kernel-5.3.10-1-pve (5.3.10-1) ...
Examining /etc/kernel/postrm.d.
run-parts: executing /etc/kernel/postrm.d/initramfs-tools 5.3.10-1-pve /boot/vmlinuz-5.3.10-1-pve
update-initramfs: Deleting /boot/initrd.img-5.3.10-1-pve
run-parts: executing /etc/kernel/postrm.d/proxmox-auto-removal 5.3.10-1-pve /boot/vmlinuz-5.3.10-1-pve
run-parts: executing /etc/kernel/postrm.d/zz-proxmox-boot 5.3.10-1-pve /boot/vmlinuz-5.3.10-1-pve
Re-executing '/etc/kernel/postrm.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/cmdline found - falling back to /proc/cmdline
Copying and configuring kernels on /dev/disk/by-uuid/68B1-9000
Copying kernel 5.15.39-1-pve
Copying kernel 5.15.64-1-pve
Copying kernel 5.4.189-1-pve
cp: error writing '/var/tmp/espmounts/68B1-9000/initrd.img-5.4.189-1-pve': No space left on device
run-parts: /etc/kernel/postrm.d/zz-proxmox-boot exited with return code 1
Failed to process /etc/kernel/postrm.d at /var/lib/dpkg/info/pve-kernel-5.3.10-1-pve.postrm line 14.
dpkg: error processing package pve-kernel-5.3.10-1-pve (--remove):
 installed pve-kernel-5.3.10-1-pve package post-removal script subprocess returned error exit status 1
dpkg: too many errors, stopping
Errors were encountered while processing:
 pve-kernel-5.3.10-1-pve
Processing was halted because there were too many errors.
E: Sub-process /usr/bin/dpkg returned an error code (1)
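For anyone hitting the same error: the cp line names the exact device it was writing to, so it can be checked directly. A rough sketch (the UUID 68B1-9000 comes from the output above, so substitute your own; the mount point name is arbitrary):

mkdir -p /tmp/espcheck
mount /dev/disk/by-uuid/68B1-9000 /tmp/espcheck
df -h /tmp/espcheck      # shows how full the partition really is
ls -lh /tmp/espcheck     # old vmlinuz-* / initrd.img-* copies add up fast
umount /tmp/espcheck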

While trying to fix this, I came across this post: http://www.techpository.com/linux-error-with-update-initramfs-no-space-left-on-device/

I made the changes it suggested, trying MODULES=all as well as MODULES=mod, but I still get the error below. I also tried removing some old kernels, but that did not go well either.
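For reference, the change is just a one-line setting in the initramfs config. A sketch of the file (the path matches the vi command in the output below; MODULES=most is the value the mkinitramfs error itself recommends, all and mod are what I had tried):

# /etc/initramfs-tools/conf.d/modules
MODULES=most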


# update-initramfs -k all -u
update-initramfs: Generating /boot/initrd.img-5.15.64-1-pve
mkinitramfs: failed to determine device for /
mkinitramfs: workaround is MODULES=most, check: grep -r MODULES /etc/initramfs-tools
Error please report bug on initramfs-tools
Include the output of 'mount' and 'cat /proc/mounts'
update-initramfs: failed for /boot/initrd.img-5.15.64-1-pve with 1.
root@pve:/etc/initramfs-tools/conf.d# vi /etc/initramfs-tools/conf.d/modules
root@pve:/etc/initramfs-tools/conf.d# update-initramfs -k all -u
update-initramfs: Generating /boot/initrd.img-5.15.64-1-pve
W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/cmdline found - falling back to /proc/cmdline
Copying and configuring kernels on /dev/disk/by-uuid/68B1-9000
Copying kernel 5.15.39-1-pve
Copying kernel 5.15.64-1-pve
Copying kernel 5.4.189-1-pve
cp: error writing '/var/tmp/espmounts/68B1-9000/initrd.img-5.4.189-1-pve': No space left on device
run-parts: /etc/initramfs/post-update.d//proxmox-boot-sync exited with return code 1

NOTE: I am pretty sure there are NO problems with disk space.

root@pve:/# du --max-depth=1 -h
7.4M    ./etc
3.5M    ./run
647M    ./home
4.6G    ./usr
40K     ./media
512     ./bpool
629G    ./apool-0
2.4T    ./cpool
512     ./srv
2.0K    ./rpool
512     ./opt
0       ./sys
34K     ./tmp
599M    ./boot
2.0K    ./mnt
512     ./apool
5.1G    ./var
121K    ./root
43M     ./dev
du: cannot access './proc/1556508/task/1556508/fd/3': No such file or directory
du: cannot access './proc/1556508/task/1556508/fdinfo/3': No such file or directory
du: cannot access './proc/1556508/fd/4': No such file or directory
du: cannot access './proc/1556508/fdinfo/4': No such file or directory
du: cannot access './proc/1603221': No such file or directory
du: cannot access './proc/1609159': No such file or directory
du: cannot access './proc/1609166': No such file or directory
du: cannot access './proc/1609168': No such file or directory
du: cannot access './proc/1611606': No such file or directory
du: cannot access './proc/1611610': No such file or directory
du: cannot access './proc/1612544': No such file or directory
du: cannot access './proc/1613300': No such file or directory
du: cannot access './proc/1613301': No such file or directory
du: cannot access './proc/1613303': No such file or directory
du: cannot access './proc/1613414': No such file or directory
0       ./proc
3.0T    .
root@pve:/# df -h
Filesystem                        Size  Used Avail Use% Mounted on
udev                               16G     0   16G   0% /dev
tmpfs                             3.2G  3.3M  3.2G   1% /run
rpool/ROOT/pve-1                  104G   11G   93G  11% /
tmpfs                              16G   52M   16G   1% /dev/shm
tmpfs                             5.0M     0  5.0M   0% /run/lock
rpool                              93G  128K   93G   1% /rpool
rpool/ROOT                         93G  128K   93G   1% /rpool/ROOT
rpool/data                         93G  128K   93G   1% /rpool/data
cpool                             362G  132G  230G  37% /cpool
cpool/backups                     232G  2.1G  230G   1% /cpool/backups
cpool/cloud                       405G  175G  230G  44% /cpool/cloud
cpool/pve                         345G  115G  230G  34% /cpool/pve
cpool/share                       2.2T  2.0T  230G  90% /cpool/share
cpool/backups/zfs-autobackup      230G  256K  230G   1% /cpool/backups/zfs-autobackup
cpool/backups/zfs-backup          230G  256K  230G   1% /cpool/backups/zfs-backup
tmpfs                             3.2G   64K  3.2G   1% /run/user/117
apool-0                           1.3T  150G  1.2T  12% /apool-0
apool-0/share                     1.2T   11G  1.2T   1% /apool-0/share
apool-0/pve                       1.2T   33G  1.2T   3% /apool-0/pve
apool-0/incoming                  1.2T  6.3G  1.2T   1% /apool-0/incoming
apool-0/backups                   1.2T  256K  1.2T   1% /apool-0/backups
apool-0/pve/subvol-110-disk-0     8.0G  1.4G  6.7G  18% /apool-0/pve/subvol-110-disk-0
apool-0/pve/subvol-143-disk-1     8.0G  2.8G  5.3G  35% /apool-0/pve/subvol-143-disk-1
apool-0/pve/subvol-104-disk-0     8.0G  1.5G  6.6G  19% /apool-0/pve/subvol-104-disk-0
apool-0/pve/subvol-101-disk-0     8.0G  779M  7.3G  10% /apool-0/pve/subvol-101-disk-0
apool-0/pve/basevol-1001-disk-0   5.0G  533M  4.5G  11% /apool-0/pve/basevol-1001-disk-0
apool-0/pve/subvol-105-disk-0     8.0G  2.4G  5.7G  30% /apool-0/pve/subvol-105-disk-0
apool-0/pve/subvol-128-disk-0     4.0G  528M  3.5G  13% /apool-0/pve/subvol-128-disk-0
apool-0/pve/basevol-2002-disk-0   8.0G  615M  7.4G   8% /apool-0/pve/basevol-2002-disk-0
apool-0/pve/subvol-107-disk-0     5.0G  1.2G  3.9G  23% /apool-0/pve/subvol-107-disk-0
apool-0/pve/subvol-102-disk-0     8.0G  1.7G  6.4G  22% /apool-0/pve/subvol-102-disk-0
apool-0/pve/subvol-114-disk-0     4.0G  1.2G  2.9G  29% /apool-0/pve/subvol-114-disk-0
apool-0/pve/subvol-117-disk-0     8.0G  861M  7.2G  11% /apool-0/pve/subvol-117-disk-0
apool-0/pve/subvol-103-disk-0     8.0G  901M  7.2G  11% /apool-0/pve/subvol-103-disk-0
apool-0/pve/basevol-2000-disk-0   8.0G  458M  7.6G   6% /apool-0/pve/basevol-2000-disk-0
apool-0/pve/basevol-2001-disk-0   8.0G  657M  7.4G   9% /apool-0/pve/basevol-2001-disk-0
apool-0/pve/subvol-108-disk-0     5.0G  1.1G  4.0G  21% /apool-0/pve/subvol-108-disk-0
apool-0/pve/subvol-135-disk-0     5.0G  2.7G  2.4G  54% /apool-0/pve/subvol-135-disk-0
apool-0/backups/burptemp2         1.2T   16G  1.2T   2% /apool-0/backups/burptemp2
apool-0/pve/subvol-109-disk-0     8.0G  418M  7.6G   6% /apool-0/pve/subvol-109-disk-0
apool-0/pve/subvol-1000-disk-0    5.0G  536M  4.5G  11% /apool-0/pve/subvol-1000-disk-0
apool-0/pve/subvol-145-disk-0     8.0G  3.6G  4.5G  45% /apool-0/pve/subvol-145-disk-0
apool-0/pve/subvol-220401-disk-0  8.0G  470M  7.6G   6% /apool-0/pve/subvol-220401-disk-0
apool-0/backups/burp              1.6T  389G  1.2T  26% /apool-0/backups/burp
tmpfs                             3.2G   60K  3.2G   1% /run/user/3000
tmpfs                             3.2G   56K  3.2G   1% /run/user/0
/dev/fuse                         128M   36K  128M   1% /etc/pve
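Worth noting: df -h will never show the ESPs, because proxmox-boot-tool only mounts them temporarily under /var/tmp/espmounts while it syncs kernels. Assuming a proxmox-boot-tool managed setup (which the zz-proxmox-boot hook in the output above implies), something like this lists them:

proxmox-boot-tool status                              # lists the configured ESPs by UUID
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT | grep -i vfat   # find the vfat ESP partitions on disk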

UPDATE

I also have a bunch of these in my dmesg; not sure if they are related or not:

[4340213.007817] audit: type=1400 audit(1667785160.201:5483): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-135_</var/lib/lxc>" name="/run/systemd/unit-root/" pid=1635318 comm="(ionclean)" srcname="/" flags="rw, rbind"
 
Thank you very much for your prompt and insightful reply. You were correct, the ESP partitions were full :)

Also, I am very sorry it took so long to post my results here, especially since I received help so quickly. It was a very busy week, and I only now got a chance to fix this issue.

I mainly followed this link, as posted above, with a few noted extras, since I tried and failed a few times at first because I was not able to free enough space on the partitions:

https://forum.proxmox.com/threads/lvm-thin-blocks-boot-and-now-is-missing.114540/#post-498360

So in addition to removing some kernels in the /tmp/myesp0 and /tmp/myesp1 directories, I also had to remove some kernels in the EFI/proxmox directory. Thanks again, as that got the dist-upgrade to complete, and I tested things with a successful reboot. Lastly, I may clean things up further on those partitions, as they are still quite full and I think I can remove a few more kernels, especially from the proxmox directory.
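Roughly what the cleanup looked like, for anyone following along. This is a sketch, not exact history: the partition name is an example (find yours with lsblk), and 5.3.10-1-pve is just an illustrative old version.

mkdir -p /tmp/myesp0
mount /dev/sda2 /tmp/myesp0                  # example partition; substitute your own ESP

ls -lh /tmp/myesp0                           # old kernels at the top level of the ESP
rm /tmp/myesp0/vmlinuz-5.3.10-1-pve /tmp/myesp0/initrd.img-5.3.10-1-pve

ls /tmp/myesp0/EFI/proxmox                   # check the layout before deleting anything
rm -r /tmp/myesp0/EFI/proxmox/5.3.10-1-pve   # the extra step I needed; keep recent kernels!

umount /tmp/myesp0
# repeat for the second ESP at /tmp/myesp1, then re-run apt-get dist-upgrade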

I will post any further cleanup notes here after I test them, but for now the problem is solved and the system is rebooting reliably again.
 
Glad to hear that all worked out well.

Lastly, I may clean things up further on those partitions, as they are still quite full and I think I can remove a few more kernels, especially from the proxmox directory.

Since PVE 6, apt autoremove or apt autoremove --purge also takes care of the PVE kernels [1], so I would highly recommend running it on a regular basis.

You might need to manually remove any older leftover kernels (from before the mentioned introduction) one time. apt remove ... or apt purge ... should be sufficient for this, since there should now again be enough free space to work with.
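A rough example of that one-time manual pass (the version number is just an illustration; never remove the kernel you are currently running):

dpkg --list | grep pve-kernel        # see which kernel packages are installed
uname -r                             # the running kernel - keep this one
apt purge pve-kernel-5.3.10-1-pve    # example old version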

[1] https://forum.proxmox.com/threads/clean-old-kernels.42040/#post-257792
 
