[SOLVED] dpkg hanging when upgrading PVE kernel

Ovidiu

Renowned Member
Any hints? I'm kinda stumped on how to fix this.

I did an apt-get update followed by apt-get dist-upgrade, which kept hanging, so I pressed CTRL+C.
When running apt-get dist-upgrade again I got:
Code:
E: dpkg was interrupted, you must manually run 'dpkg --configure -a' to correct the problem.

But this command also hangs for hours without any result:
Code:
dpkg --configure -a
Setting up pve-kernel-5.4.128-1-pve (5.4.128-2) ...
Examining /etc/kernel/postinst.d.
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.4.128-1-pve /boot/vmlinuz-5.4.128-1-pve
run-parts: executing /etc/kernel/postinst.d/initramfs-tools 5.4.128-1-pve /boot/vmlinuz-5.4.128-1-pve
update-initramfs: Generating /boot/initrd.img-5.4.128-1-pve

...

Failed to process /etc/kernel/postinst.d at /var/lib/dpkg/info/pve-kernel-5.4.128-1-pve.postinst line 19.
dpkg: error processing package pve-kernel-5.4.128-1-pve (--configure):
 installed pve-kernel-5.4.128-1-pve package post-installation script subprocess returned error exit status 2
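
(If it helps to narrow this down: re-running the failing hook by hand usually surfaces the underlying error. A sketch, assuming the kernel version from the output above; update-initramfs and run-parts are the tools the hook itself calls:)
Code:
# regenerate the initramfs for the affected kernel and watch for the real error
update-initramfs -u -k 5.4.128-1-pve

# or run the whole postinst hook directory verbosely
run-parts --verbose --arg=5.4.128-1-pve --arg=/boot/vmlinuz-5.4.128-1-pve /etc/kernel/postinst.d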
 
Hi,

Can you show us the output of df -h? Maybe something is full.
 
Thanks, I already suspected that, but it's not the case. I have 2 ZFS pools. (I'm skipping a few irrelevant lines here.)
Code:
df -h

Filesystem                                                                     Size  Used Avail Use% Mounted on
udev                                                                            63G     0   63G   0% /dev
tmpfs                                                                           13G  1.1G   12G   9% /run
rpool/ROOT/pve-1                                                               528G  8.9G  519G   2% /
tmpfs                                                                           63G   39M   63G   1% /dev/shm
tmpfs                                                                          5.0M     0  5.0M   0% /run/lock
tmpfs                                                                           63G     0   63G   0% /sys/fs/cgroup
rpool                                                                          519G  128K  519G   1% /rpool
rpool/docker                                                                   519G   73M  519G   1% /var/lib/docker
rpool/ROOT                                                                     519G  128K  519G   1% /rpool/ROOT
sixer                                                                           82G  240K   82G   1% /sixer
/dev/fuse                                                                       30M   24K   30M   1% /etc/pve
rpool/data                                                                     519G  256K  519G   1% /rpool/data
sixer/backups                                                                  397G  315G   82G  80% /sixer/backups
sixer/Documents                                                                112G   31G   82G  28% /sixer/Documents
sixer/tmp                                                                      109G   28G   82G  26% /sixer/tmp
sixer/docker                                                                   124G   43G   82G  35% /sixer/docker

rpool basically has 519G free and sixer 82G, but sixer only holds data.
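
(Since both pools are ZFS, it may also be worth checking usage at the pool and dataset level directly, as df output on ZFS datasets can be misleading; a quick sketch:)
Code:
zpool list                      # overall pool capacity and free space
zfs list -o name,used,avail     # per-dataset usage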
 
I guess you're booting with ZFS and UEFI?

Please check if your ESP (EFI System Partition) is full; see the output of proxmox-boot-tool status.

You'll see something like:
Code:
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
74A6-91A9 is configured with: uefi (versions: 5.11.22-2-pve, 5.11.22-3-pve)

then check:
Code:
$ ls -al /dev/disk/by-uuid/74A6-91A9 
lrwxrwxrwx 1 root root 10 Aug 25 10:26 /dev/disk/by-uuid/74A6-91A9 -> ../../sda2

which should give you the ESP partition. Then you can mount it: mkdir /tmp/myesp; mount /dev/sda2 /tmp/myesp. Afterwards check df -h again to see if /tmp/myesp is full; if it is, you can make some space there by deleting the older kernels.

When done, unmount the directory and retry the package upgrade.
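
A sketch of the whole check-and-clean sequence, assuming /dev/sda2 is your ESP as in the example above (the EFI/proxmox path and the kernel subcommands are how proxmox-boot-tool-managed systems work; adjust to your setup):
Code:
mkdir /tmp/myesp
mount /dev/sda2 /tmp/myesp
df -h /tmp/myesp                 # is the ESP full?
ls /tmp/myesp/EFI/proxmox        # kernel images kept on the ESP
umount /tmp/myesp

# cleaner alternative: let proxmox-boot-tool manage the kernels
proxmox-boot-tool kernel list
proxmox-boot-tool kernel remove 5.11.22-2-pve   # example version from above
proxmox-boot-tool refresh                       # re-sync all configured ESPs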
 
BTW, nvme0 and nvme1 are mirrored, and this is where PVE runs.
Code:
proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with uefi
C284-A04D is configured with: uefi (versions: 5.4.119-1-pve, 5.4.124-1-pve, 5.4.128-1-pve)
C284-C5E6 is configured with: uefi (versions: 5.4.119-1-pve, 5.4.124-1-pve, 5.4.128-1-pve)

Code:
ls -al /dev/disk/by-uuid/C284-A04D
lrwxrwxrwx 1 root root 15 Jun 30 08:54 /dev/disk/by-uuid/C284-A04D -> ../../nvme0n1p2
Code:
mkdir /tmp/myesp
mount /dev/nvme0n1p2 /tmp/myesp
df -h |grep myesp
/dev/nvme0n1p2                                                                 511M  173M  339M  34% /tmp/myesp

Code:
ls -al /dev/disk/by-uuid/C284-C5E6
lrwxrwxrwx 1 root root 15 Jun 30 08:54 /dev/disk/by-uuid/C284-C5E6 -> ../../nvme1n1p2
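
(For reference, the intended sequence for checking the second ESP would have been something like:)
Code:
umount /tmp/myesp                  # release the first ESP
mount /dev/nvme1n1p2 /tmp/myesp    # mount the second ESP
df -h | grep myesp
umount /tmp/myesp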

And then, while trying to unmount and mount the second location, I must have made a mistake somewhere. Now I can't even do a df -h anymore :-(
Code:
df -h
df: cannot read table of mounted file systems: No such file or directory

I'm a bit lost and scared to reboot, as this PVE runs on my NAS, which hosts my VPN and the workstation I am writing from right now. Since I'm working remotely for another 2 weeks, I guess I'll just have to leave it hanging like this until I return, unless someone can talk me through what could have gone wrong here.
No idea how I messed this up:
Code:
 mount -a
mount: /proc: wrong fs type, bad option, bad superblock on proc, missing codepage or helper program, or other error.

cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
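
(If /proc really got unmounted, remounting it by hand should bring df back without a reboot; a sketch — on Debian-based systems /etc/mtab is a symlink into /proc, which is why df fails when /proc is gone:)
Code:
mount -t proc proc /proc
df -h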
 
And then, while trying to unmount and mount the second location, I must have made a mistake somewhere. Now I can't even do a df -h anymore :-(
Can you check the command history?
 
Can you check the command history?
For some reason, it ends about 3 hours ago?
Code:
ls -al /root/.bash*
-rw------- 1 root root 11177 Aug 25 09:19 /root/.bash_history
But you are on the right track: I'm on a weak connection, and I either copy/pasted something wrong or, while reverse-searching the bash history with CTRL+R, selected something by mistake.
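(One way to search the history file directly instead of relying on CTRL+R, as a sketch:)
Code:
grep -n 'mount' /root/.bash_history | tail -n 20   # recent mount/umount commands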
OK, pressing arrow-up and going through my history, I found the culprit :-/
Code:
umount /dev/nvme0n1p2
How the heck did I unmount one partition of the ZFS mirror the OS is booting from?
I guess I really need to restart.
 
For some reason, after a reboot and
Code:
dpkg --configure -a
followed by
Code:
apt --fix-broken install
all is good.
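
(Summarizing the recovery that worked here, roughly: free space on the ESPs if needed, reboot, then:)
Code:
dpkg --configure -a          # finish the interrupted kernel configuration
apt --fix-broken install     # clean up any remaining dependency state
apt-get dist-upgrade         # retry the original upgrade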
Hello

The same just happened to me.

First, I uninstalled Netdata and then rebooted, but the restart did not go through correctly: I saw in the Proxmox GUI that the system had been up for 10 days, and logging in via PuTTY was very slow.

I issued a reboot command from the GUI, and after a while the server rebooted fine this time.

Then the "pveupgrade" command gave me the error message "E: dpkg was interrupted, you must manually run 'dpkg --configure -a' to correct the problem."

With the above commands it was fixed, and the update proceeded fine.

Thank you