Man down

maco1717

Active Member
Last Friday I started experiencing some inconsistencies on the cluster.

An overview of my environment (no HA):
root@hq-proxmox-04:~# pvecm status
Quorum information
------------------
Date: Mon Jul 31 11:32:03 2017
Quorum provider: corosync_votequorum
Nodes: 9
Node ID: 0x00000004
Ring ID: 1/111708
Quorate: Yes

Votequorum information
----------------------
Expected votes: 9
Highest expected: 9
Total votes: 9
Quorum: 5
Flags: Quorate

Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.20.30.228
0x00000002 1 10.20.30.229
0x00000003 1 10.20.30.230
0x00000004 1 10.20.30.231 (local)
0x00000005 1 10.20.30.232
0x00000006 1 10.20.30.233
0x00000007 1 10.20.30.234
0x00000008 1 10.20.30.235
0x00000009 1 10.20.30.236
root@hq-proxmox-04:~# pveversion -v
proxmox-ve: 4.4-84 (running kernel: 4.4.44-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.44-1-pve: 4.4.44-84
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0.1-1
pve-cluster: 4.0-49
qemu-server: 4.0-110
pve-firmware: 1.1-11
libpve-common-perl: 4.0-94
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-2
pve-docs: 4.4-4
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-97
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80


I accessed the web UI on node 1 as I normally do and it was unreachable, although SSH and the VMs were still working (I realised these last two too late). So I accessed node 2, and suddenly all the nodes started blinking: in a matter of seconds nodes flipped between online and offline several times until they all stayed offline. Fortunately I still had access to the management console (web UI) and SSH, and the VMs were running.

I tried all the obvious things, and whatever else I could find around. On node 1 I tried restarting the services:
#service pve-cluster restart
#service pvestatd restart

And then I tried restarting both nodes (1 and 2).

But nothing seemed to pick up.
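
For reference, the checks I was pairing with those restarts (a sketch, assuming the stock unit names on a PVE 4.x node):
Code:
systemctl status pve-cluster corosync    # are the cluster services actually running?
pvecm status                             # does this node still have quorum?
journalctl -u corosync -u pve-cluster --since "1 hour ago"   # recent cluster logs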

It's difficult to know what happened next, as it has been a few days and everything is starting to blur.

But I tried to upgrade node 1 to Proxmox 5, which failed (didn't go all the way through). Not happy with that, I tried to upgrade another node (which had no running VMs).

So my situation now is two nodes where Proxmox was upgraded but the upgrade didn't go all the way through:
root@hq-proxmox-07:~# pveversion -v
proxmox-ve: not correctly installed (running kernel: 4.4.67-1-pve)
pve-manager: not correctly installed (running version: 5.0-23/af4267bf)
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.10.17-1-pve: 4.10.17-16
libpve-http-server-perl: 2.0-5
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: not correctly installed
qemu-server: not correctly installed
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: not correctly installed
libpve-access-control: not correctly installed
libpve-storage-perl: 5.0-12
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-2
pve-container: not correctly installed
pve-firewall: not correctly installed
pve-ha-manager: not correctly installed
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1

Plus a couple of nodes (1 and 2) with VMs that I'm unable to turn on or migrate/move to another node.

If I try moving a VM via the CLI, when I try to browse the node directory
#ls /etc/pve/nodes/NODE/qemu-server/

it just "freezes" and hangs there without doing anything.
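
As far as I understand, /etc/pve is not a normal directory but the pmxcfs FUSE mount, and reads on it block while the cluster filesystem is unhealthy. A way to probe it without wedging the shell (timeout is plain coreutils; same path as above):
Code:
mount | grep /etc/pve                            # confirm it is the fuse cluster filesystem
timeout 5 ls /etc/pve/nodes/NODE/qemu-server/    # give up after 5s instead of hanging forever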

When I tried starting one of the VMs on node 2, I got this:
TASK ERROR: start failed: command '/usr/bin/kvm -id 104 -chardev 'socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/104.pid -daemonize -smbios 'type=1,uuid=c842a898-0d71-436f-9bc5-93dd580ee0e2' -name hq-qa-02 -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/104.vnc,x509,password -no-hpet -cpu 'kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,enforce' -m 16384 -k en-us -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:2ef1e0ee357b' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' -drive 'file=/mnt/pve/HQ_RNFS_01_SSD/images/104/vm-104-disk-1.qcow2,if=none,id=drive-sata0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'ide-drive,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' -drive 'file=/mnt/pve/HQ_RNFS_01_SSD/images/104/vm-104-disk-2.qcow2,if=none,id=drive-sata1,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'ide-drive,bus=ahci0.1,drive=drive-sata1,id=sata1' -drive 'file=/mnt/pve/HQ_RNFS_01_SSD/images/104/vm-104-disk-3.qcow2,if=none,id=drive-sata2,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'ide-drive,bus=ahci0.2,drive=drive-sata2,id=sata2' -drive 'file=/mnt/pve/HQ_RNFS_01_SSD/images/104/vm-104-disk-4.qcow2,if=none,id=drive-sata3,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'ide-drive,bus=ahci0.3,drive=drive-sata3,id=sata3' -drive 'file=/mnt/pve/HQ_RNFS_01_SSD/images/104/vm-104-disk-5.qcow2,if=none,id=drive-sata4,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'ide-drive,bus=ahci0.4,drive=drive-sata4,id=sata4' -drive 'file=/mnt/pve/HQ_RNFS_01_SSD/images/104/vm-104-disk-6.qcow2,if=none,id=drive-sata5,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'ide-drive,bus=ahci0.5,drive=drive-sata5,id=sata5' -netdev 'type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=4A:9C:14:A0:66:5B,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'' failed: got timeout

Task details:
Status: stopped: start failed: (same kvm command as above) failed: got timeout
Task type: qmstart
User name: root@pam
Node: hq-proxmox-02
Process ID: 7031
Start Time: 2017-07-31 15:19:28
Unique task ID: UPID:hq-proxmox-02:00001B77:00046000:597F3C70:qmstart:104:root@pam:
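
Since the disks live on /mnt/pve/HQ_RNFS_01_SSD, the storage is on my suspect list too. A quick sketch of checks, assuming the storage ID matches the mount name:
Code:
pvesm status                           # is the storage marked active?
timeout 5 ls /mnt/pve/HQ_RNFS_01_SSD   # does the NFS mount itself still respond?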

I also remember getting this a lot:
Jul 28 10:13:10 hq-proxmox-01 pmxcfs[963]: [status] notice: cpg_send_message retried 1 times

What logs would you advise me to have a look at, and which would you want to see? Also, any advice on what I could be looking at?
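
For reference, the places I have been looking so far (a sketch):
Code:
journalctl -u pve-cluster -u corosync -b       # cluster services, current boot
grep -E 'pmxcfs|corosync' /var/log/daemon.log  # pmxcfs messages (like the one above) land here too
tail -f /var/log/syslog                        # watch live while reproducing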

The last thing I did before everything started to go crazy was add a node (one that had been deleted, though this was a new build). It was working fine like this overnight, and for that matter it is still "working fine".

Kind regards.
\M
 
Hi,
you have uninstalled some PVE components; maybe they conflict with something.

What repository do you use?
Please send the output of /etc/apt/sources.list.d/* and /etc/apt/sources.list
 
Hi Wolfgang,

Thanks for your reply,

As I mentioned above, I tried upgrading a couple of boxes to see if this would sort the issues I was having (mainly the cluster being down), but the installation didn't go through.

These are the repo files for the "upgraded" boxes:
root@hq-proxmox-01:~# cat /etc/apt/sources.list
#

# deb cdrom:[Debian GNU/Linux 8.6.0 _Jessie_ - Official amd64 NETINST Binary-1 20160917-14:20]/ stretch main

#deb cdrom:[Debian GNU/Linux 8.6.0 _Jessie_ - Official amd64 NETINST Binary-1 20160917-14:20]/ stretch main

deb http://ftp.uk.debian.org/debian/ stretch main
deb-src http://ftp.uk.debian.org/debian/ stretch main

#deb http://security.debian.org/ stretch/updates main
deb-src http://security.debian.org/ stretch/updates main

deb http://ftp.debian.org/debian stretch main contrib

# security updates
deb http://security.debian.org stretch/updates main contrib

# stretch-updates, previously known as 'volatile'
deb http://ftp.uk.debian.org/debian/ stretch-updates main
deb-src http://ftp.uk.debian.org/debian/ stretch-updates main

root@hq-proxmox-01:~# cat /etc/apt/sources.list.d/*
deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
deb http://download.proxmox.com/debian/pve stretch pve-no-subscription

And these are the repo files for the non-upgraded boxes:
root@hq-proxmox-02:~# cat /etc/apt/sources.list
#

# deb cdrom:[Debian GNU/Linux 8.6.0 _Jessie_ - Official amd64 NETINST Binary-1 20160917-14:20]/ jessie main

#deb cdrom:[Debian GNU/Linux 8.6.0 _Jessie_ - Official amd64 NETINST Binary-1 20160917-14:20]/ jessie main

deb http://ftp.uk.debian.org/debian/ jessie main
deb-src http://ftp.uk.debian.org/debian/ jessie main

deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main

# jessie-updates, previously known as 'volatile'
deb http://ftp.uk.debian.org/debian/ jessie-updates main
deb-src http://ftp.uk.debian.org/debian/ jessie-updates main
root@hq-proxmox-02:~# cat /etc/apt/sources.list.d/*
deb https://enterprise.proxmox.com/debian jessie pve-enterprise
deb http://download.proxmox.com/debian jessie pve-no-subscription

As a side note, it would be interesting to know whether it is possible to roll back an upgrade, or in this case an attempted upgrade. It is not the root of the issue here (I only decided to upgrade after the cluster issues started), but it might be a problem in itself.

One more time, thanks.
\M
 
I've noticed my /etc/hosts was F*ed.

Some nodes had hostname_domain rather than hostname.domain for the FQDN. I've changed that on all 9 nodes. Could this be / have caused the issues?
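
For reference, this is the shape of a corrected entry (FQDN with the dot, then the short name; example.com stands in for our real domain):
Code:
# each node's name must resolve to its cluster address, e.g. on node 4:
10.20.30.231 hq-proxmox-04.example.com hq-proxmox-04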

\M
 
My cluster is long gone; I don't think there is anything else to do.

NOW: how do I start moving/restoring VMs (I want to start with the running ones) to a new cluster? I don't think I'll be able to do anything on the cluster as it is, so consider the cluster DEAD.
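
What I have in mind for getting the guests across is backup/restore, roughly like this (a sketch; VMID 104 as an example, and the dump directory is just an assumed path both sides can reach). Note that vzdump reads the guest config from /etc/pve, so this only works on nodes where the cluster filesystem still responds:
Code:
# on the old node: live backup of the running guest (snapshot mode)
vzdump 104 --mode snapshot --dumpdir /mnt/pve/HQ_RNFS_01_SSD/dump
# on a node of the new cluster: restore the archive, keeping (or changing) the VMID
qmrestore /mnt/pve/HQ_RNFS_01_SSD/dump/vzdump-qemu-104-*.vma 104 --storage local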

Thanks.
\M
 
Hi,
normally you should be able to get the cluster running again.

First, make a backup of /etc/pve (and of corosync) on a node which has quorum, and save the content in a safe place!
For the backup, something like:
Code:
tar cvf /root/etc_pve.tar /etc/pve
tar cvf /root/corosync.tar /etc/corosync
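
They are plain tar archives, so you can sanity-check them afterwards with e.g.:
Code:
tar tvf /root/etc_pve.tar | head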

About the not correctly upgraded nodes (start with only one!):
What is the output of the following commands?
Code:
dpkg -l | grep pve
apt update
apt dist-upgrade
Udo
 
Hi Udo,

Thanks for your reply.

I don't know what happened, but this morning node 1 (the one I had upgraded) started working again. It still cannot show the rest of the nodes as online in the web UI, but that is another matter.
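
For the web UI status I plan to start with the status daemons (a sketch, standard PVE service names):
Code:
systemctl restart pvestatd pveproxy   # the daemons that collect and serve the UI status
pvecm status                          # confirm the node still sees the whole cluster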

May I ask: would I have been able to back up pve and corosync on any node in the cluster, or do I have to back up each node individually?

Kind regards and thanks.
\M
 
That said, I went ahead and tried to upgrade another node.

This is what I got, which is similar to what was happening on node 1:
root@hq-proxmox-08:~# dpkg -l | grep pve
ii corosync 2.4.2-pve3 amd64 cluster engine daemon and utilities
ii corosync-pve 2.4.2-pve3 all Transitional package.
ii dmeventd 2:1.02.137-pve2 amd64 Linux Kernel Device Mapper event daemon
ii dmsetup 2:1.02.137-pve2 amd64 Linux Kernel Device Mapper userspace library
ii grub-common 2.02-pve6 amd64 GRand Unified Bootloader (common files)
ii grub-pc 2.02-pve6 amd64 GRand Unified Bootloader, version 2 (PC/BIOS version)
ii grub-pc-bin 2.02-pve6 amd64 GRand Unified Bootloader, version 2 (PC/BIOS binaries)
ii grub2-common 2.02-pve6 amd64 GRand Unified Bootloader (common files for version 2)
ii libcfg6:amd64 2.4.2-pve3 amd64 cluster engine CFG library
ii libcmap4:amd64 2.4.2-pve3 amd64 cluster engine CMAP library
ii libcorosync-common4:amd64 2.4.2-pve3 amd64 cluster engine common library
ii libcorosync4-pve 2.4.2-pve3 all Transitional package.
ii libcpg4:amd64 2.4.2-pve3 amd64 cluster engine CPG library
ii libdevmapper-event1.02.1:amd64 2:1.02.137-pve2 amd64 Linux Kernel Device Mapper event support library
ii libdevmapper1.02.1:amd64 2:1.02.137-pve2 amd64 Linux Kernel Device Mapper userspace library
ii liblvm2app2.2:amd64 2.02.168-pve2 amd64 LVM2 application library
ii liblvm2cmd2.02:amd64 2.02.168-pve2 amd64 LVM2 command library
iU libpve-access-control 5.0-5 amd64 Proxmox VE access control library
ii libpve-common-perl 5.0-16 all Proxmox VE base library
iU libpve-guest-common-perl 2.0-11 all Proxmox VE common guest-related modules
ii libpve-http-server-perl 2.0-5 all Proxmox Asynchrounous HTTP Server Implementation
ii libpve-storage-perl 5.0-12 all Proxmox VE storage management library
ii libquorum5:amd64 2.4.2-pve3 amd64 cluster engine Quorum library
ii libtotem-pg5:amd64 2.4.2-pve3 amd64 cluster engine Totem library
ii libvotequorum8:amd64 2.4.2-pve3 amd64 cluster engine Votequorum library
ii lvm2 2.02.168-pve2 amd64 Linux Logical Volume Manager
ii lxc-pve 2.0.8-3 amd64 Linux containers usersapce tools
ii lxcfs 2.0.7-pve2 amd64 LXC userspace filesystem
ii novnc-pve 0.6-4 amd64 HTML5 VNC client
iF pve-cluster 5.0-12 amd64 Cluster Infrastructure for Proxmox Virtual Environment
iU pve-container 2.0-15 all Proxmox VE Container management tool
ii pve-docs 5.0-9 all Proxmox VE Documentation
iU pve-firewall 3.0-2 amd64 Proxmox VE Firewall
iU pve-ha-manager 2.0-2 amd64 Proxmox VE HA Manager
ii pve-kernel-4.4.67-1-pve 4.4.67-90 amd64 The Proxmox PVE Kernel Image
ii pve-libspice-server1 0.12.8-3 amd64 SPICE remote display system server library
iU pve-manager 5.0-23 amd64 Proxmox Virtual Environment Management Tools
ii pve-qemu-kvm 2.9.0-2 amd64 Full virtualization on x86 hardware
root@hq-proxmox-08:~# apt update
Ign:1 http://ftp.uk.debian.org/debian stretch InRelease
Hit:2 http://security.debian.org stretch/updates InRelease
Hit:3 http://ftp.uk.debian.org/debian stretch-updates InRelease
Hit:4 http://ftp.uk.debian.org/debian stretch Release
Hit:5 http://download.proxmox.com/debian stretch InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
root@hq-proxmox-08:~# apt dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following packages were automatically installed and are no longer required:
apparmor apt-transport-https attr bridge-utils ceph-common corosync corosync-pve criu cstream dmeventd docutils-common docutils-doc dtach faketime fonts-font-awesome gdisk glusterfs-client glusterfs-common hdparm ifenslave ipset
javascript-common libacl1-dev libaio1 libalgorithm-c3-perl libanyevent-http-perl libanyevent-perl libapparmor-perl libappconfig-perl libapt-pkg-perl libarchive-extract-perl libasound2 libasound2-data libasprintf0c2
libasync-interrupt-perl libasyncns0 libattr1-dev libauthen-pam-perl libb-hooks-endofscope-perl libbabeltrace-ctf1 libbabeltrace1 libbind9-90 libboost-program-options1.62.0 libboost-random1.62.0 libboost-regex1.62.0
libboost-system1.55.0 libboost-thread1.55.0 libboost-thread1.62.0 libc-dev-bin libc6-dev libcaca0 libcephfs1 libcfg6 libclass-c3-perl libclass-c3-xs-perl libclass-method-modifiers-perl libclass-xsaccessor-perl libclone-perl
libcmap4 libcorosync-common4 libcorosync4-pve libcpan-changes-perl libcpan-meta-perl libcpg4 libcrypt-openssl-bignum-perl libcrypt-openssl-random-perl libcrypt-openssl-rsa-perl libdata-optlist-perl libdata-perl-perl
libdata-section-perl libdbi1 libdevel-caller-perl libdevel-cycle-perl libdevel-globaldestruction-perl libdevel-lexalias-perl libdevmapper-event1.02.1 libdirectfb-1.2-9 libdns100 libdw1 libelf1 libev-perl libexporter-tiny-perl
libfaketime libfcgi-bin libfcgi0ldbl libfile-chdir-perl libfile-readbackwards-perl libfile-slurp-perl libfile-sync-perl libfilesys-df-perl libflac8 libgetopt-long-descriptive-perl libgnutlsxx28 libgoogle-perftools4 libguard-perl
libibverbs1 libice6 libimport-into-perl libintl-perl libintl-xs-perl libio-multiplex-perl libio-stringy-perl libipset3 libisc95 libisccc90 libisccfg90 libiscsi4 libiscsi7 libjasper1 libjemalloc1 libjs-extjs libjs-jquery
liblcms2-2 liblinux-inotify2-perl liblist-moreutils-perl liblockfile-simple-perl liblog-agent-perl liblog-message-perl liblog-message-simple-perl liblvm2app2.2 liblvm2cmd2.02 liblwres90 liblzo2-2 libmime-base32-perl
libmodule-build-perl libmodule-implementation-perl libmodule-load-conditional-perl libmodule-pluggable-perl libmodule-runtime-perl libmodule-signature-perl libmoo-perl libmoox-handlesvia-perl libmro-compat-perl
libnamespace-autoclean-perl libnamespace-clean-perl libnet-dbus-perl libnet-dns-perl libnet-ip-perl libnet1 libnetfilter-log1 libnl-3-200 libnl-route-3-200 libnspr4 libnss3 libogg0 libpackage-constants-perl libpackage-stash-perl
libpackage-stash-xs-perl libpadwalker-perl libpaper-utils libpaper1 libparams-classify-perl libparams-util-perl libparams-validate-perl libpath-tiny-perl libperl4-corelibs-perl libpng12-0 libpod-latex-perl libpod-markdown-perl
libpod-readme-perl libprotobuf-c1 libprotobuf10 libprotobuf9 libpth20 libpulse0 libpve-access-control libpve-common-perl libpve-guest-common-perl libpve-http-server-perl libpve-storage-perl libpython2.7 libqb0 libquorum5
librados2 librados2-perl libradosstriper1 librbd1 librdmacm1 libreadline5 libregexp-common-perl librgw2 librole-tiny-perl librrd4 librrd8 librrds-perl libsdl1.2debian libsm6 libsndfile1 libsoftware-license-perl libstatgrab10
libstrictures-perl libstring-shellquote-perl libsub-exporter-perl libsub-exporter-progressive-perl libsub-identify-perl libsub-install-perl libtcmalloc-minimal4 libtemplate-perl libterm-ui-perl libtext-template-perl
libtie-ixhash-perl libtotem-pg5 libtry-tiny-perl libtype-tiny-perl libtype-tiny-xs-perl libunicode-utf8-perl libunwind8 liburcu4 libusbredirparser1 libuuid-perl libvariable-magic-perl libvorbis0a libvorbisenc2 libvotequorum8
libwebp5 libwebp6 libwebpdemux1 libwebpdemux2 libwebpmux1 libwebpmux2 libx11-xcb1 libxapian22 libxml-twig-perl libxml-xpathengine-perl libxslt1.1 libxtst6 linux-libc-dev lvm2 lxc-pve lxcfs lzop manpages-dev novnc-pve numactl
pve-cluster pve-container pve-docs pve-firewall pve-ha-manager pve-libspice-server1 pve-manager pve-qemu-kvm python-blinker python-ceph python-cephfs python-cffi python-cffi-backend python-click python-colorama
python-cryptography python-defusedxml python-docutils python-enum34 python-flask python-idna python-ipaddr python-ipaddress python-itsdangerous python-jinja2 python-markupsafe python-ndg-httpsclient python-openssl python-pil
python-ply python-protobuf python-pyasn1 python-pycparser python-pygments python-pyinotify python-rados python-rbd python-requests python-roman python-simplejson python-soappy python-urllib3 python-werkzeug python-wstools
qemu-server rrdcached rsync smartmontools socat spiceterm sqlite3 thin-provisioning-tools uidmap vncterm x11-common xsltproc
Use 'apt autoremove' to remove them.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
9 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Setting up pve-cluster (5.0-12) ...


Job for pve-cluster.service failed because a timeout was exceeded.
See "systemctl status pve-cluster.service" and "journalctl -xe" for details.
invoke-rc.d: initscript pve-cluster, action "restart" failed.
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: failed (Result: timeout) since Wed 2017-08-02 11:25:15 BST; 4ms ago
Process: 4259 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=killed, signal=TERM)
Process: 4255 ExecStart=/usr/bin/pmxcfs $DAEMON_OPTS (code=exited, status=0/SUCCESS)
Main PID: 4257 (code=killed, signal=KILL)

Aug 02 11:23:35 hq-proxmox-08 pmxcfs[4257]: [status] notice: members: 5/989, 6/1118, 8/4257, 9/0
Aug 02 11:23:35 hq-proxmox-08 pmxcfs[4257]: [status] notice: starting data syncronisation
Aug 02 11:25:05 hq-proxmox-08 systemd[1]: pve-cluster.service: Start-post operation timed out. Stopping.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: State 'stop-sigterm' timed out. Killing.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: Killing process 4257 (pmxcfs) with signal SIGKILL.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: Killing process 4259 (pvecm) with signal SIGKILL.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: Main process exited, code=killed, status=9/KILL
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: Unit entered failed state.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: Failed with result 'timeout'.
dpkg: error processing package pve-cluster (--configure):
subprocess installed post-installation script returned error exit status 1
dpkg: dependency problems prevent configuration of pve-firewall:
pve-firewall depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing package pve-firewall (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libpve-guest-common-perl:
libpve-guest-common-perl depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing package libpve-guest-common-perl (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of qemu-server:
qemu-server depends on pve-cluster; however:
Package pve-cluster is not configured yet.
qemu-server depends on pve-firewall; however:
Package pve-firewall is not configured yet.
qemu-server depends on libpve-guest-common-perl; however:
Package libpve-guest-common-perl is not configured yet.

dpkg: error processing package qemu-server (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of pve-manager:
pve-manager depends on qemu-server (>= 1.1-1); however:
Package qemu-server is not configured yet.
pve-manager depends on pve-cluster (>= 1.0-29); however:
Package pve-cluster is not configured yet.
pve-manager depends on pve-firewall; however:
Package pve-firewall is not configured yet.

dpkg: error processing package pve-manager (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of libpve-access-control:
libpve-access-control depends on pve-cluster; however:
Package pve-cluster is not configured yet.

dpkg: error processing package libpve-access-control (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of pve-container:
pve-container depends on libpve-guest-common-perl; however:
Package libpve-guest-common-perl is not configured yet.
pve-container depends on pve-cluster (>= 4.0-8); however:
Package pve-cluster is not configured yet.

dpkg: error processing package pve-container (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of pve-ha-manager:
pve-ha-manager depends on pve-cluster (>= 3.0-17); however:
Package pve-cluster is not configured yet.
pve-ha-manager depends on qemu-server; however:
Package qemu-server is not configured yet.

dpkg: error processing package pve-ha-manager (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of librados2-perl:
librados2-perl depends on libpve-access-control; however:
Package libpve-access-control is not configured yet.

dpkg: error processing package librados2-perl (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
pve-cluster
pve-firewall
libpve-guest-common-perl
qemu-server
pve-manager
libpve-access-control
pve-container
pve-ha-manager
librados2-perl
E: Sub-process /usr/bin/dpkg returned an error code (1)

root@hq-proxmox-08:~# systemctl status pve-cluster.service
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: failed (Result: timeout) since Wed 2017-08-02 11:25:15 BST; 1min 31s ago
Process: 4259 ExecStartPost=/usr/bin/pvecm updatecerts --silent (code=killed, signal=TERM)
Process: 4255 ExecStart=/usr/bin/pmxcfs $DAEMON_OPTS (code=exited, status=0/SUCCESS)
Main PID: 4257 (code=killed, signal=KILL)

Aug 02 11:23:35 hq-proxmox-08 pmxcfs[4257]: [status] notice: members: 5/989, 6/1118, 8/4257, 9/0
Aug 02 11:23:35 hq-proxmox-08 pmxcfs[4257]: [status] notice: starting data syncronisation
Aug 02 11:25:05 hq-proxmox-08 systemd[1]: pve-cluster.service: Start-post operation timed out. Stopping.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: State 'stop-sigterm' timed out. Killing.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: Killing process 4257 (pmxcfs) with signal SIGKILL.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: Killing process 4259 (pvecm) with signal SIGKILL.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: Main process exited, code=killed, status=9/KILL
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: Failed to start The Proxmox VE cluster filesystem.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: Unit entered failed state.
Aug 02 11:25:15 hq-proxmox-08 systemd[1]: pve-cluster.service: Failed with result 'timeout'.
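
Reading the log above, it is the ExecStartPost step (pvecm updatecerts) that times out, presumably waiting on the cluster. One way I could check whether pmxcfs itself is healthy is its local mode (a sketch; use with care, and only while the service is stopped):
Code:
systemctl stop pve-cluster
pmxcfs -l            # start the cluster filesystem in local (standalone) mode
ls /etc/pve          # should list the configs instead of hanging
killall pmxcfs       # stop it again before retrying the service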
 
