Hello! Good Morning.
I have a problem with Proxmox VE 4.0-50 / d3a6b7e5 (running kernel: 4.2.1-1-pve).
I made the mistake of running apt-get upgrade instead of apt-get dist-upgrade, and the package dependencies were left in a broken, half-configured state.
It is a 6-node cluster, and the nodes can no longer see each other.
I can log in only on the first node; the others give errors like "connection refused" or "invalid username or password".
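For reference, this is the upgrade sequence I understand the Proxmox documentation recommends (and what I should have run instead):

```shell
# Refresh the package index first
apt-get update

# 'dist-upgrade' is required for Proxmox VE updates because new package
# versions may add or remove dependencies; plain 'upgrade' refuses to do
# that and can leave packages half-configured, as happened here.
apt-get dist-upgrade
```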
pvecm status on the first node:
Quorum information
------------------
Date: Wed Oct 21 10:04:32 2015
Quorum provider: corosync_votequorum
Nodes: 6
Node ID: 0x00000003
Ring ID: 110660
Quorate: Yes
Votequorum information
----------------------
Expected votes: 6
Highest expected: 6
Total votes: 6
Quorum: 4
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000003 1 192.168.130.2 (local)
0x00000002 1 192.168.130.13
0x00000001 1 192.168.130.106
0x00000005 1 192.168.130.194
0x00000004 1 192.168.130.203
0x00000006 1 192.168.130.204
pvecm status on the second node:
ipcc_send_rec failed: Connection refused
ipcc_send_rec failed: Connection refused
ipcc_send_rec failed: Connection refused
The pve configuration filesystem (/etc/pve) is not mounted on that node.
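On the affected nodes I can check whether the cluster filesystem is running at all. A quick check, assuming the standard PVE 4.x service names:

```shell
# pmxcfs (the /etc/pve configuration filesystem) is provided by the
# pve-cluster package; if its service is down, the API refuses
# connections ("ipcc_send_rec failed: Connection refused")
systemctl status pve-cluster.service

# Verify whether /etc/pve is actually mounted
mount | grep /etc/pve
```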
Errors after running apt-get dist-upgrade (trying to fix things):
dpkg: error processing package pve-container (--configure):
dependency problems - leaving unconfigured
dpkg: dependency problems prevent configuration of pve-manager:
pve-manager depends on qemu-server (>= 1.1-1); however:
Package qemu-server is not configured yet.
pve-manager depends on pve-cluster (>= 1.0-29); however:
Package pve-cluster is not configured yet.
pve-manager depends on pve-ha-manager; however:
Package pve-ha-manager is not configured yet.
pve-manager depends on pve-container; however:
Package pve-container is not configured yet.
dpkg: error processing package pve-manager (--configure):
dependency problems - leaving unconfigured
Errors were encountered while processing:
pve-cluster
qemu-server
pve-ha-manager
pve-container
pve-manager
E: Sub-process /usr/bin/dpkg returned an error code (1)
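Based on general Debian recovery advice, this is what I am considering running to finish the half-configured packages; I have not dared to run it on all nodes yet:

```shell
# Try to finish configuring all unpacked-but-unconfigured packages
dpkg --configure -a

# Then let apt resolve and pull in any missing dependencies
apt-get update
apt-get -f install

# Finally re-run the full upgrade so the PVE packages reach a
# consistent state
apt-get dist-upgrade
```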
I'm afraid to reboot the nodes in case the VMs fail to start again (they are currently still running).
Thank you very much for your help!