Hi,
I have a strange effect - I added the last remaining node (pve06) to a now 7-node cluster (pve01-07).
All nodes report this version:
Code:
pveversion
pve-manager/4.4-13/7ea56165 (running kernel: 4.4.44-1-pve)
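To double-check that on all seven nodes at once, a loop like this should do (just a sketch; it assumes the root SSH access between cluster nodes that pvecm sets up):
Code:
# print the pve-manager version as reported by every node
for n in pve01 pve02 pve03 pve04 pve05 pve06 pve07; do
    echo -n "$n: "; ssh "$n" pveversion
done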
But the last two nodes (pve07 + pve06) have also received some updates that are still missing on pve04 (and on pve01 and pve02):
Code:
root@pve04:~# apt list --upgradable
Listing... Done
eject/stable 2.1.5+deb1+cvs20081104-13.1+deb8u1 amd64 [upgradable from: 2.1.5+deb1+cvs20081104-13.1]
libjasper1/stable 1.900.1-debian1-2.4+deb8u3 amd64 [upgradable from: 1.900.1-debian1-2.4+deb8u2]
libqb0/stable 1.0.1-1 amd64 [upgradable from: 1.0-1]
libsmbclient/stable 2:4.2.14+dfsg-0+deb8u5 amd64 [upgradable from: 2:4.2.14+dfsg-0+deb8u4]
libwbclient0/stable 2:4.2.14+dfsg-0+deb8u5 amd64 [upgradable from: 2:4.2.14+dfsg-0+deb8u4]
proxmox-ve/stable 4.4-86 all [upgradable from: 4.4-84]
pve-cluster/stable 4.0-49 amd64 [upgradable from: 4.0-48]
pve-container/stable 1.0-97 all [upgradable from: 1.0-96]
pve-docs/stable 4.4-4 all [upgradable from: 4.4-3]
pve-firmware/stable 1.1-11 all [upgradable from: 1.1-10]
qemu-server/stable 4.0-110 amd64 [upgradable from: 4.0-109]
samba-common/stable 2:4.2.14+dfsg-0+deb8u5 all [upgradable from: 2:4.2.14+dfsg-0+deb8u4]
samba-libs/stable 2:4.2.14+dfsg-0+deb8u5 amd64 [upgradable from: 2:4.2.14+dfsg-0+deb8u4]
smbclient/stable 2:4.2.14+dfsg-0+deb8u5 amd64 [upgradable from: 2:4.2.14+dfsg-0+deb8u4]
vncterm/stable 1.3-2 amd64 [upgradable from: 1.3-1]
Nevertheless, all hosts show the same version in pveversion?!
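Presumably that is because plain pveversion only prints the pve-manager version and the running kernel; pveversion -v lists all PVE-related packages, so a comparison like this should make the skew visible (a sketch, again assuming root SSH between the nodes):
Code:
# diff the full package versions of pve04 against pve06;
# skew shows up here even when plain pveversion looks identical
diff <(pveversion -v) <(ssh pve06 pveversion -v)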
Now the strange thing: if I want to migrate a VM from pve04 to pve06, I get the following error message:
Code:
root@pve04:~# qm migrate 400 pve06 --online --with-local-disks
no such cluster node 'pve06'
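If it helps with debugging: as far as I know, qm validates the migration target against the member list kept in pmxcfs, so checking what pve04's cluster filesystem reports might show where the stale view sits (a sketch; paths as on PVE 4.x):
Code:
# how does pve04's cluster filesystem currently see the members?
# pve06 should appear in both outputs
root@pve04:~# cat /etc/pve/.members
root@pve04:~# pvecm nodes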
The web frontend on pve04 also doesn't show pve06, while the web frontend on pve06 shows all nodes!
The cluster is healthy:
Code:
root@pve04:~# pvecm status
Quorum information
------------------
Date: Mon Apr 10 20:04:56 2017
Quorum provider: corosync_votequorum
Nodes: 7
Node ID: 0x00000004
Ring ID: 1/1192
Quorate: Yes
Votequorum information
----------------------
Expected votes: 7
Highest expected: 7
Total votes: 7
Quorum: 4
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.1.2.11
0x00000002 1 10.1.2.12
0x00000003 1 10.1.2.13
0x00000004 1 10.1.2.14 (local)
0x00000005 1 10.1.2.15
0x00000006 1 10.1.2.16
0x00000007 1 10.1.2.17
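For completeness, corosync's own view of the ring and the member list on pve04 can be checked with the usual tools (a sketch):
Code:
# ring status and quorum member list as corosync itself sees them
root@pve04:~# corosync-cfgtool -s
root@pve04:~# corosync-quorumtool -l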
And pve06 is in the hosts file:
Code:
root@pve04:~# grep pve06 /etc/hosts
10.1.2.16 pve06.sub.dom.net pve06
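So pve04 apparently has a stale view of the cluster membership. I suppose restarting the cluster filesystem and the API daemons there might refresh it (just a guess, not verified; as far as I know this does not touch running VMs):
Code:
# guess: restart pmxcfs and the GUI/API daemons on the stale node;
# running VMs are handled by their own processes and stay untouched
root@pve04:~# systemctl restart pve-cluster
root@pve04:~# systemctl restart pvedaemon pveproxy pvestatd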
Due to strange effects on a cluster where I once updated a node with running VMs at the weekend (the load went up to 20-30), I would rather upgrade pve04 only after migrating all VMs to other nodes...
Any hints?
Udo