Hi
I know there have been similar issues about this in earlier versions of Proxmox, which is why I am including the output of the commands that were asked for in those threads.
The issue happened two hours ago when I tried to update the first node. I let it sit for quite a long time, but it stayed stuck at 98%. Below is the output of the update procedure.
Code:
root@dellprox3:~# apt upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
pve-kernel-5.15.102-1-pve
The following packages have been kept back:
proxmox-ve pve-kernel-helper
The following packages will be upgraded:
libnss-systemd libpam-systemd libpve-access-control libpve-cluster-api-perl libpve-cluster-perl libpve-common-perl
libpve-guest-common-perl libpve-http-server-perl libpve-rs-perl libpve-storage-perl libsystemd0 libudev1
proxmox-widget-toolkit pve-cluster pve-container pve-docs pve-edk2-firmware pve-firewall pve-firmware pve-ha-manager
pve-i18n pve-kernel-5.15 pve-manager pve-qemu-kvm qemu-server systemd systemd-sysv tzdata udev
29 upgraded, 1 newly installed, 0 to remove and 2 not upgraded.
Need to get 229 MB of archives.
After this operation, 408 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libsystemd0 amd64 247.3-7+1-pmx11u1 [376 kB]
Get:2 http://ftp.gr.debian.org/debian bullseye-updates/main amd64 tzdata all 2021a-1+deb11u9 [286 kB]
Get:3 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpam-systemd amd64 247.3-7+1-pmx11u1 [283 kB]
Get:4 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libnss-systemd amd64 247.3-7+1-pmx11u1 [199 kB]
Get:5 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 systemd amd64 247.3-7+1-pmx11u1 [4,501 kB]
Get:6 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 udev amd64 247.3-7+1-pmx11u1 [1,464 kB]
Get:7 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libudev1 amd64 247.3-7+1-pmx11u1 [168 kB]
Get:8 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 systemd-sysv amd64 247.3-7+1-pmx11u1 [113 kB]
Get:9 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-rs-perl amd64 0.7.5 [1,885 kB]
Get:10 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-cluster-api-perl all 7.3-3 [46.2 kB]
Get:11 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 libpve-cluster-perl all 7.3-3 [28.1 kB]
Found initrd image: /boot/initrd.img-5.15.85-1-pve
Found linux image: /boot/vmlinuz-5.15.74-1-pve
Found initrd image: /boot/initrd.img-5.15.74-1-pve
Found memtest86+ image: /ROOT/pve-1@/boot/memtest86+.bin
Found memtest86+ multiboot image: /ROOT/pve-1@/boot/memtest86+_multiboot.bin
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
Adding boot menu entry for UEFI Firmware Settings ...
done
Setting up udev (247.3-7+1-pmx11u1) ...
Setting up pve-i18n (2.11-1) ...
Setting up libpam-systemd:amd64 (247.3-7+1-pmx11u1) ...
Setting up libpve-cluster-perl (7.3-3) ...
Setting up libpve-http-server-perl (4.2-1) ...
Setting up pve-edk2-firmware (3.20230228-1) ...
Setting up pve-kernel-5.15 (7.3-3) ...
Setting up libpve-storage-perl (7.4-2) ...
Setting up libpve-access-control (7.4-2) ...
Setting up libpve-cluster-api-perl (7.3-3) ...
Setting up libpve-guest-common-perl (4.2-4) ...
Setting up pve-firewall (4.3-1) ...
Setting up qemu-server (7.4-3) ...
Setting up pve-container (4.4-3) ...
Setting up pve-ha-manager (3.6.0) ...
watchdog-mux.service is a disabled or a static unit, not starting it.
Setting up pve-manager (7.4-3) ...
Progress: [ 98%] [################################################################################################..]
SSH-ing into the server again and issuing an upgrade command gives me:
Waiting for cache lock: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 6729 (apt)
The same happened on the third node I tried to update: it got stuck at 98% as well, and running the upgrade again showed a different process ID holding /var/lib/dpkg/lock-frontend.
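In case it helps, I can poke at the lock holder with standard tools (PID taken from the error above; it differs per node):
Code:
# show which process currently holds the dpkg frontend lock
fuser -v /var/lib/dpkg/lock-frontend

# state and age of the apt process named in the error
ps -o pid,ppid,stat,etime,cmd -p 6729

# any children it spawned (e.g. a hung maintainer script)
pstree -p 6729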
Here is some output that may be useful (based on previous threads I found):
pveversion -v
Code:
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)
pve-manager: not correctly installed (running version: 7.4-3/9002ab8a)
pve-kernel-helper: 7.3-2
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 15.2.17-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: not correctly installed
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: not correctly installed
libpve-apiclient-perl: 3.2-1
libpve-common-perl: not correctly installed
libpve-guest-common-perl: not correctly installed
libpve-http-server-perl: not correctly installed
libpve-rs-perl: not correctly installed
libpve-storage-perl: not correctly installed
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: not correctly installed
lxcfs: not correctly installed
novnc-pve: not correctly installed
proxmox-backup-client: not correctly installed
proxmox-backup-file-restore: not correctly installed
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: not correctly installed
pve-cluster: not correctly installed
pve-container: not correctly installed
pve-docs: not correctly installed
pve-edk2-firmware: not correctly installed
pve-firewall: not correctly installed
pve-firmware: not correctly installed
pve-ha-manager: not correctly installed
pve-i18n: not correctly installed
pve-qemu-kvm: not correctly installed
pve-xtermjs: 4.16.0-1
qemu-server: not correctly installed
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: not correctly installed
vncterm: 1.7-1
zfsutils-linux: not correctly installed
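If I understand dpkg correctly, all those "not correctly installed" entries just mean the packages were unpacked but their configuration step never ran because the upgrade hung. So once the lock is freed, the usual recovery would presumably be the following, though I have not dared to run it yet:
Code:
# finish configuring every unpacked-but-unconfigured package
dpkg --configure -a

# then let apt repair any remaining dependency state
apt install -f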
ps waux | grep pveproxy
Code:
www-data 2055 0.0 0.8 352860 144044 ? Ss 12:06 0:00 pveproxy
www-data 25536 0.0 0.8 353376 131688 ? S 12:29 0:00 pveproxy worker
www-data 25537 0.0 0.8 353376 131688 ? S 12:29 0:00 pveproxy worker
www-data 25538 0.0 0.8 353376 131688 ? S 12:29 0:00 pveproxy worker
root 52354 0.0 0.0 6244 644 pts/2 S+ 15:11 0:00 grep pveproxy
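So pveproxy itself and its workers are apparently still running. If it is useful, I can also dump the state of the other PVE services the same way:
Code:
# overall state of the main PVE services on the stuck node
systemctl status pveproxy pvedaemon pvestatd pve-cluster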
cat /proc/6729/stack
Code:
[<0>] do_select+0x57c/0x870
[<0>] core_sys_select+0x1b0/0x3e0
[<0>] do_pselect.constprop.0+0xca/0x170
[<0>] __x64_sys_pselect6+0x5c/0xa0
[<0>] do_syscall_64+0x59/0xc0
[<0>] entry_SYSCALL_64_after_hwframe+0x61/0xcb
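If I read this right, the stack just shows the apt process sitting in pselect(), i.e. waiting on file descriptors, not wedged in the kernel. Assuming the same PID, I can check what it is actually waiting for with:
Code:
# list the file descriptors apt is polling
ls -l /proc/6729/fd

# attach and watch its system calls (detach with Ctrl+C)
strace -p 6729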
And finally there is the long list of ps faxl output (I have to attach it as a .txt file since it has too many lines).
When there is a cluster configuration, do all nodes have to be online while updating each one of them individually?
What can I do now, short of rebooting and rendering the cluster unusable?
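For completeness, I can still check quorum from any of the nodes before touching anything:
Code:
# show cluster membership and quorum state
pvecm status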
Thank you