apt-get update && apt-get dist-upgrade (failed)

Nathan Stratton

Dec 28, 2018
Setting up pve-cluster (5.0-31) ...
Job for pve-ha-lrm.service failed because the control process exited with error code.
See "systemctl status pve-ha-lrm.service" and "journalctl -xe" for details.
dpkg: error processing package pve-cluster (--configure):
subprocess installed post-installation script returned error exit status 1
Setting up python-rados (12.2.10-pve1) ...
Setting up pve-kernel-4.15 (5.2-12) ...
dpkg: dependency problems prevent configuration of pve-firewall:
pve-firewall depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing package pve-firewall (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libpve-guest-common-perl:
libpve-guest-common-perl depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing package libpve-guest-common-perl (--configure):
dependency problems - leaving unconfigured
Setting up libfdt1:amd64 (1.4.2-1) ...
dpkg: dependency problems prevent configuration of qemu-server:
qemu-server depends on libpve-guest-common-perl (>= 2.0-18); however:
Package libpve-guest-common-perl is not configured yet.
qemu-server depends on pve-cluster; however:
Package pve-cluster is not configured yet.
qemu-server depends on pve-firewall; however:
Package pve-firewall is not configured yet.

dpkg: error processing package qemu-server (--configure):
dependency problems - leaving unconfigured
Setting up pve-libspice-server1 (0.14.1-1) ...
Setting up python-cephfs (12.2.10-pve1) ...
Setting up libradosstriper1 (12.2.10-pve1) ...
dpkg: dependency problems prevent configuration of libpve-storage-perl:
libpve-storage-perl depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing package libpve-storage-perl (--configure):
dependency problems - leaving unconfigured
Setting up librgw2 (12.2.10-pve1) ...
dpkg: dependency problems prevent configuration of pve-manager:
pve-manager depends on libpve-guest-common-perl (>= 2.0-14); however:
Package libpve-guest-common-perl is not configured yet.
pve-manager depends on libpve-storage-perl (>= 5.0-18); however:
Package libpve-storage-perl is not configured yet.
pve-manager depends on pve-cluster (>= 5.0-27); however:
Package pve-cluster is not configured yet.
pve-manager depends on pve-firewall; however:
Package pve-firewall is not configured yet.
pve-manager depends on qemu-server (>= 5.0-24); however:
Package qemu-server is not configured yet.

dpkg: error processing package pve-manager (--configure):
dependency problems - leaving unconfigured
Setting up python-rgw (12.2.10-pve1) ...
dpkg: dependency problems prevent configuration of libpve-access-control:
libpve-access-control depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing package libpve-access-control (--configure):
dependency problems - leaving unconfigured
Setting up librbd1 (12.2.10-pve1) ...
Setting up python-rbd (12.2.10-pve1) ...
dpkg: dependency problems prevent configuration of pve-container:
pve-container depends on libpve-guest-common-perl; however:
Package libpve-guest-common-perl is not configured yet.
pve-container depends on libpve-storage-perl (>= 5.0-31); however:
Package libpve-storage-perl is not configured yet.
pve-container depends on pve-cluster (>= 4.0-8); however:
Package pve-cluster is not configured yet.

dpkg: error processing package pve-container (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent processing triggers for pve-ha-manager:
pve-ha-manager depends on pve-cluster (>= 3.0-17); however:
Package pve-cluster is not configured yet.
pve-ha-manager depends on qemu-server; however:
Package qemu-server is not configured yet.

dpkg: error processing package pve-ha-manager (--configure):
dependency problems - leaving triggers unprocessed
Setting up ceph-common (12.2.10-pve1) ...
Setting system user ceph properties..usermod: no changes
..done
Fixing /var/run/ceph ownership....done
Setting up pve-qemu-kvm (2.12.1-1) ...
Setting up ceph-base (12.2.10-pve1) ...
Setting up ceph-mgr (12.2.10-pve1) ...
Setting up ceph-osd (12.2.10-pve1) ...
Setting up ceph-mon (12.2.10-pve1) ...
Setting up ceph (12.2.10-pve1) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Errors were encountered while processing:
pve-cluster
pve-firewall
libpve-guest-common-perl
qemu-server
libpve-storage-perl
pve-manager
libpve-access-control
pve-container
pve-ha-manager
E: Sub-process /usr/bin/dpkg returned an error code (1)


root@virt0:~# pveversion -v
proxmox-ve: not correctly installed (running kernel: 4.15.18-1-pve)
pve-manager: not correctly installed (running version: 5.3-6/37b3c8df)
pve-kernel-4.15: 5.2-12
pve-kernel-4.13: 5.2-2
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-1-pve: 4.15.18-19
pve-kernel-4.15.17-3-pve: 4.15.17-14
pve-kernel-4.13.16-4-pve: 4.13.16-51
pve-kernel-4.13.16-3-pve: 4.13.16-50
pve-kernel-4.13.16-1-pve: 4.13.16-46
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-2-pve: 4.13.13-33
ceph: 12.2.10-pve1
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: not correctly installed
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-43
libpve-guest-common-perl: not correctly installed
libpve-http-server-perl: 2.0-11
libpve-storage-perl: not correctly installed
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.0.2+pve1-5
lxcfs: 3.0.2-2
novnc-pve: 1.0.0-2
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-22
pve-cluster: not correctly installed
pve-container: not correctly installed
pve-docs: 5.3-1
pve-edk2-firmware: 1.20181023-1
pve-firewall: not correctly installed
pve-firmware: 2.0-6
pve-ha-manager: not correctly installed
pve-i18n: 1.0-9
pve-libspice-server1: 0.14.1-1
pve-qemu-kvm: 2.12.1-1
pve-xtermjs: 1.0-5
qemu-server: not correctly installed
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.12-pve1~bpo1
 
Job for pve-ha-lrm.service failed because the control process exited with error code.
See "systemctl status pve-ha-lrm.service" and "journalctl -xe" for details.
This seems to be the root cause of your problems - what's the output of the two commands indicated above?

Once that problem is resolved, try running `apt-get install -f` and/or `dpkg --configure -a`.
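For reference, a minimal sketch of that recovery sequence, assuming the underlying pve-cluster problem has been fixed first (the order below is the usual one, not something verified on this cluster):

dpkg --configure -a     # configure the packages dpkg left unconfigured
apt-get install -f      # let apt resolve anything still broken
apt-get dist-upgrade    # re-run the upgrade to confirm nothing is pending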
 
root@virt0:~# systemctl status pve-ha-lrm.service
● pve-ha-lrm.service - PVE Local HA Ressource Manager Daemon
Loaded: loaded (/lib/systemd/system/pve-ha-lrm.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2018-12-28 14:01:08 EST; 2min 40s ago
Process: 356360 ExecStart=/usr/sbin/pve-ha-lrm start (code=exited, status=111)
Main PID: 4521 (code=exited, status=0/SUCCESS)
CPU: 590ms

Dec 28 14:01:08 virt0 pve-ha-lrm[356360]: ipcc_send_rec[1] failed: Connection refused
Dec 28 14:01:08 virt0 pve-ha-lrm[356360]: ipcc_send_rec[2] failed: Connection refused
Dec 28 14:01:08 virt0 pve-ha-lrm[356360]: ipcc_send_rec[2] failed: Connection refused
Dec 28 14:01:08 virt0 pve-ha-lrm[356360]: ipcc_send_rec[3] failed: Connection refused
Dec 28 14:01:08 virt0 pve-ha-lrm[356360]: ipcc_send_rec[3] failed: Connection refused
Dec 28 14:01:08 virt0 pve-ha-lrm[356360]: Unable to load access control list: Connection refused
Dec 28 14:01:08 virt0 systemd[1]: pve-ha-lrm.service: Control process exited, code=exited status=111
Dec 28 14:01:08 virt0 systemd[1]: Failed to start PVE Local HA Ressource Manager Daemon.
Dec 28 14:01:08 virt0 systemd[1]: pve-ha-lrm.service: Unit entered failed state.
Dec 28 14:01:08 virt0 systemd[1]: pve-ha-lrm.service: Failed with result 'exit-code'.
root@virt0:~# journalctl -xe
Dec 28 14:03:53 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:53 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:53 virt0 corosync[356324]: notice [QUORUM] Members[11]: 1 2 3 4 5 6 8 9 10 11 12
Dec 28 14:03:53 virt0 corosync[356324]: notice [MAIN ] Completed service synchronization, ready to provide service.
Dec 28 14:03:53 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:53 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:53 virt0 corosync[356324]: [QUORUM] Members[11]: 1 2 3 4 5 6 8 9 10 11 12
Dec 28 14:03:53 virt0 corosync[356324]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 28 14:03:55 virt0 corosync[356324]: notice [TOTEM ] A new membership (10.88.64.120:18763092) was formed. Members
Dec 28 14:03:55 virt0 corosync[356324]: [TOTEM ] A new membership (10.88.64.120:18763092) was formed. Members
Dec 28 14:03:55 virt0 corosync[356324]: warning [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: warning [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: warning [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: warning [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: warning [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:55 virt0 corosync[356324]: notice [QUORUM] Members[11]: 1 2 3 4 5 6 8 9 10 11 12
Dec 28 14:03:55 virt0 corosync[356324]: notice [MAIN ] Completed service synchronization, ready to provide service.
Dec 28 14:03:55 virt0 corosync[356324]: [QUORUM] Members[11]: 1 2 3 4 5 6 8 9 10 11 12
Dec 28 14:03:55 virt0 corosync[356324]: [MAIN ] Completed service synchronization, ready to provide service.
Dec 28 14:03:56 virt0 corosync[356324]: notice [TOTEM ] A new membership (10.88.64.120:18763096) was formed. Members
Dec 28 14:03:56 virt0 corosync[356324]: [TOTEM ] A new membership (10.88.64.120:18763096) was formed. Members
Dec 28 14:03:56 virt0 corosync[356324]: warning [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: warning [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: warning [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: warning [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: notice [QUORUM] Members[11]: 1 2 3 4 5 6 8 9 10 11 12
Dec 28 14:03:56 virt0 corosync[356324]: notice [MAIN ] Completed service synchronization, ready to provide service.
Dec 28 14:03:56 virt0 corosync[356324]: [CPG ] downlist left_list: 0 received
Dec 28 14:03:56 virt0 corosync[356324]: [QUORUM] Members[11]: 1 2 3 4 5 6 8 9 10 11 12
Dec 28 14:03:56 virt0 corosync[356324]: [MAIN ] Completed service synchronization, ready to provide service.
 
* Is your cluster in a good state (`pvecm status`)?
* Can you restart pve-ha-lrm (`systemctl restart pve-ha-lrm.service`)? A combined check is sketched below.
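As a sketch, one way to check the whole stack at once (these are standard PVE unit names; `--no-pager` just keeps the output inline):

systemctl status corosync pve-cluster pve-ha-lrm pve-ha-crm --no-pager
pvecm status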
 
So it was in a good state before the upgrade, now I get:

root@virt0:~# pvecm status
ipcc_send_rec[1] failed: Connection refused
ipcc_send_rec[2] failed: Connection refused
ipcc_send_rec[3] failed: Connection refused
Unable to load access control list: Connection refused
root@virt0:~# systemctl restart pve-ha-lrm.service
Job for pve-ha-lrm.service failed because the control process exited with error code.
See "systemctl status pve-ha-lrm.service" and "journalctl -xe" for details.
root@virt0:~# systemctl status pve-ha-lrm.service
● pve-ha-lrm.service - PVE Local HA Ressource Manager Daemon
Loaded: loaded (/lib/systemd/system/pve-ha-lrm.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2018-12-28 14:09:22 EST; 13s ago
Process: 356861 ExecStart=/usr/sbin/pve-ha-lrm start (code=exited, status=111)
Main PID: 4521 (code=exited, status=0/SUCCESS)
CPU: 566ms

Dec 28 14:09:22 virt0 pve-ha-lrm[356861]: ipcc_send_rec[1] failed: Connection refused
Dec 28 14:09:22 virt0 pve-ha-lrm[356861]: ipcc_send_rec[2] failed: Connection refused
Dec 28 14:09:22 virt0 pve-ha-lrm[356861]: ipcc_send_rec[2] failed: Connection refused
Dec 28 14:09:22 virt0 pve-ha-lrm[356861]: ipcc_send_rec[3] failed: Connection refused
Dec 28 14:09:22 virt0 pve-ha-lrm[356861]: ipcc_send_rec[3] failed: Connection refused
Dec 28 14:09:22 virt0 pve-ha-lrm[356861]: Unable to load access control list: Connection refused
Dec 28 14:09:22 virt0 systemd[1]: pve-ha-lrm.service: Control process exited, code=exited status=111
Dec 28 14:09:22 virt0 systemd[1]: Failed to start PVE Local HA Ressource Manager Daemon.
Dec 28 14:09:22 virt0 systemd[1]: pve-ha-lrm.service: Unit entered failed state.
Dec 28 14:09:22 virt0 systemd[1]: pve-ha-lrm.service: Failed with result 'exit-code'.
root@virt0:~#

Note: this is what I get on the two servers I tried to upgrade. If I run status from another server I have yet to upgrade, I get:

root@virt1:~# pvecm status
Quorum information
------------------
Date: Fri Dec 28 14:11:44 2018
Quorum provider: corosync_votequorum
Nodes: 11
Node ID: 0x00000002
Ring ID: 1/18764140
Quorate: Yes

Votequorum information
----------------------
Expected votes: 12
Highest expected: 12
Total votes: 11
Quorum: 7
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.88.64.120
0x00000002 1 10.88.64.121 (local)
0x00000003 1 10.88.64.122
0x00000004 1 10.88.64.123
0x00000005 1 10.88.64.124
0x00000006 1 10.88.64.125
0x00000008 1 10.88.64.127
0x00000009 1 10.88.64.128
0x0000000a 1 10.88.64.129
0x0000000b 1 10.88.64.130
0x0000000c 1 10.88.64.131
 
root@virt0:~# pvecm status
ipcc_send_rec[1] failed: Connection refused
ipcc_send_rec[2] failed: Connection refused
ipcc_send_rec[3] failed: Connection refused
Unable to load access control list: Connection refused
It seems pve-cluster.service and corosync.service are not running.
Try restarting them and check the logs - you want to see the output you get on the third node...

As long as your cluster (and cluster filesystem) are not running correctly, restarting the HA service won't help.
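A sketch of the restart-and-check order implied here (corosync first, then the cluster filesystem; the journalctl invocation is just one way to pull the relevant log):

systemctl restart corosync
systemctl status corosync --no-pager
systemctl restart pve-cluster
journalctl -u pve-cluster -n 50 --no-pager   # see why pmxcfs fails, if it still does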
 
root@virt0:~# systemctl restart pve-cluster.service
Job for pve-cluster.service failed because the control process exited with error code.
See "systemctl status pve-cluster.service" and "journalctl -xe" for details.
root@virt0:~# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2018-12-28 14:29:18 EST; 11s ago
Process: 358399 ExecStart=/usr/bin/pmxcfs (code=exited, status=255)
Main PID: 2689 (code=killed, signal=KILL)
CPU: 8ms

Dec 28 14:29:18 virt0 systemd[1]: Starting The Proxmox VE cluster filesystem...
Dec 28 14:29:18 virt0 pmxcfs[358399]: fuse: failed to access mountpoint /etc/pve: Transport endpoint is not connected
Dec 28 14:29:18 virt0 pmxcfs[358399]: [main] crit: fuse_mount error: Transport endpoint is not connected
Dec 28 14:29:18 virt0 pmxcfs[358399]: [main] crit: fuse_mount error: Transport endpoint is not connected
Dec 28 14:29:18 virt0 pmxcfs[358399]: [main] notice: exit proxmox configuration filesystem (-1)
Dec 28 14:29:18 virt0 pmxcfs[358399]: [main] notice: exit proxmox configuration filesystem (-1)
Dec 28 14:29:18 virt0 systemd[1]: pve-cluster.service: Control process exited, code=exited status=255
Dec 28 14:29:18 virt0 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Dec 28 14:29:18 virt0 systemd[1]: pve-cluster.service: Unit entered failed state.
Dec 28 14:29:18 virt0 systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
root@virt0:~#
 
corosync needs to run before pmxcfs (the Proxmox cluster filesystem - https://pve.proxmox.com/pve-docs/chapter-pmxcfs.html )!
It also seems that you have a hanging FUSE mount of pmxcfs.

Try `fusermount -u /etc/pve`, then restart corosync (and make sure it starts correctly), then try restarting pve-cluster.
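Put together, a minimal sketch of that sequence (assuming the unmount actually succeeds):

fusermount -u /etc/pve                 # release the stale pmxcfs FUSE mount
systemctl restart corosync             # corosync has to be up before pmxcfs
systemctl status corosync --no-pager   # confirm it really started
systemctl restart pve-cluster          # pmxcfs, i.e. the cluster filesystem
systemctl restart pve-ha-lrm           # only once the above are healthy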
 
root@virt0:~# fusermount -u /etc/pve
fusermount: failed to unmount /etc/pve: Device or resource busy

Do I need to reboot? I have VMs I don't want to kill, so I have been avoiding that.
 
fusermount: failed to unmount /etc/pve: Device or resource busy
Something still has an open file handle in `/etc/pve` - you can check what it is with `fuser` or `lsof` (check their manpages, but for me `lsof -n | grep '/etc/pve'` usually does the job). These processes need to stop, and afterwards you should be able to proceed.

In general a reboot after a while is recommended - especially after a kernel upgrade (otherwise you are still running a potentially vulnerable kernel).
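As a sketch, the equivalent check with `fuser` (the kill step is only illustrative, the PID is hypothetical):

fuser -vm /etc/pve      # list processes holding the /etc/pve mount open
# kill <PID>            # then stop the offending process (hypothetical PID)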
 
So I eventually realized what my problem with /etc/pve was: it would lock up when accessing files in that tree. I did some searching and found I could fix it by doing the following on ALL nodes:

systemctl stop pve-cluster
rm -f /var/lib/pve-cluster/.pmxcfs.lockfile


I then started pve-cluster on each node, one by one.

systemctl start pve-cluster

That fixed my problem!
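For anyone hitting the same thing, a hedged sketch of that whole procedure over SSH (the virtN hostnames follow this thread's naming and are assumptions - adjust the list to your cluster):

# stop pmxcfs and remove the stale lockfile on every node first
for h in virt0 virt1 virt2; do
    ssh root@$h 'systemctl stop pve-cluster && rm -f /var/lib/pve-cluster/.pmxcfs.lockfile'
done
# then bring pve-cluster back up one node at a time
for h in virt0 virt1 virt2; do
    ssh root@$h 'systemctl start pve-cluster'
    sleep 5
done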
 