Migrate vm: "no such cluster node 'mc2-node6' (500)"

We want to migrate a VM to node mc2-node6, but we get the following error: no such cluster node 'mc2-node6' (500)
Migration to any other node works fine.

mc2-node6 is online in the cluster, and creating VMs on it works:
pvecm status
Code:
Quorum information
------------------
Date:             Fri Nov 13 12:15:45 2015
Quorum provider:  corosync_votequorum
Nodes:            11
Node ID:          0x0000000b
Ring ID:          456568
Quorate:          Yes


Votequorum information
----------------------
Expected votes:   11
Highest expected: 11
Total votes:      11
Quorum:           6
Flags:            Quorate


Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.0.10.10
0x00000002          1 10.0.10.12
0x00000003          1 10.0.10.14
0x00000004          1 10.0.10.15
0x00000005          1 10.0.10.16
0x00000006          1 10.0.10.17
0x00000007          1 10.0.10.21
0x00000008          1 10.0.10.22
0x00000009          1 10.0.10.23
0x0000000a          1 10.0.10.25
0x0000000b          1 10.0.10.26 (local)
 
Does nobody have an idea?
Is it possible to completely reset the cluster? After rebooting a node we have to re-add it to the cluster; this problem occurs after upgrading from PVE 3.x to 4.x.
I think something went wrong while upgrading all the nodes.
 
"pvecm nodes" shows the problem.
on primary node = mc2-node6
on all other nodes = 10.0.10.26
on mc2-node6 = all ok

a readd with "pvecm add 10.0.10.10 -f" doesn't work.
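
For reference, this is the kind of check I ran on each node to compare how the name resolves (just a quick sketch; the name and IP are the ones from my cluster above):
Code:
# run on every cluster node and compare the output
cat /etc/hostname                 # the node's own host name
getent hosts mc2-node6            # how this node resolves the name (via /etc/hosts or DNS)
getent hosts 10.0.10.26           # reverse lookup of the cluster IP
grep mc2-node6 /etc/hosts         # is there a static entry?
cat /etc/resolv.conf              # which "search" domain is used for lookups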

I found something more in /etc/pve/.members.
On the first node:
Code:
{"nodename": "proxmox",
"version": 38,
"cluster": { "name": "deltapeak", "version": 28, "nodes": 11, "quorate": 1 },
"nodelist": {
  "mc1-node2": { "id": 2, "online": 1, "ip": "10.0.10.12"},
  "mc1-node4": { "id": 3, "online": 1, "ip": "10.0.10.14"},
  "mc1-node5": { "id": 4, "online": 1, "ip": "10.0.10.15"},
  "mc1-node6": { "id": 5, "online": 1, "ip": "10.0.10.16"},
  "mc1-node7": { "id": 6, "online": 1, "ip": "10.0.10.17"},
  "mc2-node1": { "id": 7, "online": 1, "ip": "10.0.10.21"},
  "mc2-node2": { "id": 8, "online": 1, "ip": "10.0.10.22"},
  "mc2-node3": { "id": 9, "online": 1, "ip": "10.0.10.23"},
  "mc2-node5": { "id": 10, "online": 1, "ip": "10.0.10.25"},
  "mc2-node6": { "id": 11, "online": 1, "ip": "10.0.10.26"},
  "proxmox": { "id": 1, "online": 1, "ip": "10.0.10.10"}
  }
}

On the other nodes:
Code:
{
"nodename": "mc1-node2",
"version": 69,
"cluster": { "name": "deltapeak", "version": 27, "nodes": 10, "quorate": 1 },
"nodelist": {
  "mc1-node2": { "id": 2, "online": 1, "ip": "10.0.10.12"},
  "mc1-node4": { "id": 3, "online": 1, "ip": "10.0.10.14"},
  "mc1-node5": { "id": 4, "online": 1, "ip": "10.0.10.15"},
  "proxmox": { "id": 1, "online": 1, "ip": "10.0.10.10"},
  "mc1-node7": { "id": 6, "online": 1, "ip": "10.0.10.17"},
  "mc2-node1": { "id": 7, "online": 1, "ip": "10.0.10.21"},
  "mc2-node2": { "id": 8, "online": 1, "ip": "10.0.10.22"},
  "mc2-node3": { "id": 9, "online": 1, "ip": "10.0.10.23"},
  "mc2-node5": { "id": 10, "online": 1, "ip": "10.0.10.25"},
  "mc1-node6": { "id": 5, "online": 1, "ip": "10.0.10.16"}
  }
}


The /etc/hosts entry is correct. Does anyone have an idea?
Why do I get different versions in .members?
 
Problem solved.

We use different domains in our cluster. In the DNS settings, the "search domain" entry was the same on all nodes. After changing this setting to the domain that is actually used on the cluster node, the correct node list is synced to /etc/pve/.members.
Another solution: make an entry for every node in /etc/hosts, as sketched below.
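
Roughly like this (only a sketch; the names and IPs are the ones from the .members output above, and the search domain is a placeholder you have to replace with your real one):
Code:
# /etc/resolv.conf – the "search" domain should match the domain the node names resolve in
# (placeholder domain below – replace it with the real one)
search your-cluster-domain.example

# /etc/hosts – alternative: one static entry per cluster node
10.0.10.10  proxmox
10.0.10.12  mc1-node2
10.0.10.14  mc1-node4
10.0.10.15  mc1-node5
10.0.10.16  mc1-node6
10.0.10.17  mc1-node7
10.0.10.21  mc2-node1
10.0.10.22  mc2-node2
10.0.10.23  mc2-node3
10.0.10.25  mc2-node5
10.0.10.26  mc2-node6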
 
Hello,

I have the same problem. My /etc/hosts file is the same on all nodes.

Code:
root@HYPER06:/# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         5          1 HYPER05
         4          1 HYPER04
         6          1 HYPER06 (local)
         1          1 HYPER01
         2          1 HYPER02
         3          1 HYPER03

Code:
root@HYPER05:~# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         5          1 HYPER05 (local)
         4          1 HYPER04
         6          1 HYPER06.demax.bg
         1          1 HYPER01
         2          1 HYPER02
         3          1 HYPER03

Code:
root@HYPER06:/# cat /etc/hostname
HYPER06

Why do I get the FQDN for HYPER06, and how can this be fixed?
 
I had this problem when the A records for the nodes were removed from our DNS server.
The solution was to make an entry for every node in /etc/hosts, or to fix your DNS.
Then reboot the problem node.
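
For example, something along these lines (just a sketch; the placeholder IP must be replaced with the node's real cluster address):
Code:
# check how the node name currently resolves (via /etc/hosts or DNS)
getent hosts HYPER06

# if DNS cannot be fixed, add a static entry on every cluster node in /etc/hosts:
#   10.10.1.x   HYPER06     (placeholder IP – use the real one)
# then reboot the problem node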
 
Every node is listed in the /etc/hosts file. I reinstalled the node which caused the problem, and during installation I entered HYPER06.local as the FQDN.
Now I have this problem when I log on to HYPER04; from all other nodes everything is fine. The /etc/hosts file is the same on all nodes in the cluster and all nodes are listed.
 
It is the same on all nodes.
What I found is that HYPER06 is missing from /etc/pve/.members:

Code:
root@HYPER04:~# cat /etc/pve/.members
{
"nodename": "HYPER04",
"version": 55,
"cluster": { "name": "DEMAX", "version": 9, "nodes": 5, "quorate": 1 },
"nodelist": {
"HYPER02": { "id": 2, "online": 1, "ip": "10.10.1.253"},
"HYPER01": { "id": 1, "online": 1, "ip": "10.10.1.252"},
"HYPER03": { "id": 3, "online": 1, "ip": "10.10.1.254"},
"HYPER05": { "id": 5, "online": 1, "ip": "10.10.1.7"},
"HYPER04": { "id": 4, "online": 1, "ip": "10.10.1.13"}
}
}


On all other nodes it's there.
 
