Added another node to existing cluster and now permission error (595)

Matthias Looss

Yesterday I added a fifth node to an existing four-node cluster without any issues. I had full quorum, and both the node count and vote count showed 5. All five nodes show green in the GUI, but today I started to create a new KVM guest on the new node hn7 (its icon was green), and my existing NFS share gave me permission error 1006. Shortly after that, my node icons turned red with the 595 permission error. I rebooted, verified my /etc/hosts file, I can ping all nodes, and my network switches support multicast and have it enabled. I found some threads here with this error, but most were caused by missing multicast support or a missing /etc/hosts entry, and I have already confirmed that both are configured correctly on my system.

Code:
root@hn7:~# pvecm status
Quorum information
------------------
Date:             Tue Oct 11 18:05:45 2016
Quorum provider:  corosync_votequorum
Nodes:            5
Node ID:          0x00000001
Ring ID:          3/74272
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      5
Quorum:           3
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000003          1 192.168.101.46
0x00000004          1 192.168.101.189
0x00000002          1 192.168.101.190
0x00000005          1 192.168.101.191
0x00000001          1 192.168.101.192 (local)
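As a sanity check, multicast connectivity between all five nodes can be verified end to end with omping; this is the usual test, assuming the package is installed on every node and the command is started on all of them at roughly the same time:

Code:
# run this simultaneously on hn3-hn7; after ~10 minutes every node
# should report close to 0% loss for both unicast and multicast
apt-get install omping
omping -c 600 -i 1 -q hn3 hn4 hn5 hn6 hn7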
 
Yes, I authorized the complete subnet 192.168.101.0/24; I should have mentioned this in my post. All other nodes and the FreeNAS box hosting the NFS share are on this same subnet, and they can all be reached. I am also within the maximum number of servers allowed to connect to the NFS share.
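To rule out the export itself, the NFS server's export list can be checked from the new node; a minimal check, with the FreeNAS address left as a placeholder since it is not given in the thread:

Code:
# lists the exports FreeNAS offers and the clients/networks allowed
# to mount them; replace <freenas-ip> with the actual server address
showmount -e <freenas-ip>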
 
My four switches are brand-new Cisco Meraki MS320-48FP devices, and I just got off the phone with support; they confirmed that multicast is enabled and working. We identified one multicast IP address in a packet capture, but when I run the following command to get the actual multicast address, I get the error below:

Code:
root@hn3:~# corosync-cmapctl -g totem.interface.0.mcastaddr
Can't get key totem.interface.0.mcastaddr. Error CS_ERR_NOT_EXIST
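For what it's worth, CS_ERR_NOT_EXIST here only means the key was never set explicitly: with no mcastaddr in corosync.conf, corosync derives the multicast group from the cluster name, which is why 239.192.165.217 shows up as a joined group on vmbr0 in the ip ma output further down. The resolved runtime keys can be listed instead of queried one by one:

Code:
# dump the whole runtime keystore and filter for multicast entries
corosync-cmapctl | grep -i mcast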
 
Please post your pveversion ("pveversion -v"), network configuration ("/etc/network/interfaces", "ip a", and "ip ma"), hosts file, and corosync configuration.
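Collected in one place, that is the following set of commands, run on the affected node:

Code:
pveversion -v
cat /etc/network/interfaces
ip a
ip ma
cat /etc/hosts
cat /etc/corosync/corosync.conf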
 
Code:
root@hn5:~# clear
root@hn5:~# pveversion -v
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-3 (running version: 4.3-3/557191d3)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.13-1-pve: 4.4.13-56
pve-kernel-4.4.13-2-pve: 4.4.13-58
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-91
pve-firmware: 1.1-9
libpve-common-perl: 4.0-75
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-66
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.2-2
pve-container: 1.0-78
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80
root@hn5:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage part of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address  192.168.101.190
        netmask  255.255.255.0
        gateway  192.168.101.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes

root@hn5:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 00:25:90:2b:db:7c brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:25:90:2b:db:7d brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:25:90:2b:db:7c brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.190/24 brd 192.168.101.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::225:90ff:fe2b:db7c/64 scope link
       valid_lft forever preferred_lft forever
root@hn5:~# ip ma
1:      lo
        inet  224.0.0.1
        inet6 ff02::1
        inet6 ff01::1
2:      eth0
        link  01:00:5e:00:00:01
        inet  224.0.0.1
        inet6 ff02::1
        inet6 ff01::1
3:      eth1
        link  33:33:00:00:00:01
        inet6 ff02::1
        inet6 ff01::1
4:      vmbr0
        link  33:33:00:00:00:01
        link  01:00:5e:00:00:01
        link  33:33:ff:2b:db:7c
        link  01:00:5e:40:a5:d9
        inet  239.192.165.217
        inet  224.0.0.1
        inet6 ff02::1:ff2b:db7c
        inet6 ff02::1
        inet6 ff01::1
root@hn5:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.101.190 hn5.capitolsecurity.local hn5
192.168.101.46  hn3.capitolsecurity.local hn3
192.168.101.189 hn4.capitolsecurity.local hn4
192.168.101.191 hn6.capitolsecuirty.local hn6
192.168.101.152 hn2.capitolsecurity.local hn2

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
root@hn5:~#

root@hn5:~# cat /etc/corosync/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: hn5
    nodeid: 2
    quorum_votes: 1
    ring0_addr: hn5
  }

  node {
    name: hn6
    nodeid: 5
    quorum_votes: 1
    ring0_addr: hn6
  }

  node {
    name: hn7
    nodeid: 1
    quorum_votes: 1
    ring0_addr: hn7
  }

  node {
    name: hn4
    nodeid: 4
    quorum_votes: 1
    ring0_addr: hn4
  }

  node {
    name: hn3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: hn3
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cspcluster
  config_version: 22
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.101.46
    ringnumber: 0
  }

}

root@hn5:~#
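Two side notes on the config above: bindnetaddr is masked with the interface netmask, so 192.168.101.46 effectively means the 192.168.101.0/24 network and is valid on every node even though it happens to be hn3's address. It can also be worth confirming that the local file still matches the cluster-wide copy that pmxcfs manages on PVE 4.x:

Code:
# /etc/pve/corosync.conf is the cluster-wide copy; on a healthy node
# it should be identical (same config_version) to the local file
diff /etc/pve/corosync.conf /etc/corosync/corosync.conf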
 
Oops, I just realized I posted details from node hn5 and not hn3.

Here is all the same info for node hn3:

Code:
root@hn3:~# pveversion -v
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-3 (running version: 4.3-3/557191d3)
pve-kernel-4.4.6-1-pve: 4.4.6-48
pve-kernel-4.4.13-2-pve: 4.4.13-58
pve-kernel-4.4.16-1-pve: 4.4.16-64
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-91
pve-firmware: 1.1-9
libpve-common-perl: 4.0-75
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-66
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.2-2
pve-container: 1.0-78
pve-firewall: 2.0-31
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.5-1
lxcfs: 2.0.4-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80
root@hn3:~# cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto vmbr0
iface vmbr0 inet static
        address 192.168.101.46
        netmask 255.255.255.0
        gateway 192.168.101.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
root@hn3:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 0c:c4:7a:aa:dc:b0 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0c:c4:7a:aa:dc:b1 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:aa:dc:b0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.101.46/24 brd 192.168.101.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::ec4:7aff:feaa:dcb0/64 scope link
       valid_lft forever preferred_lft forever
5: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether da:a7:f4:85:51:68 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d8a7:f4ff:fe85:5168/64 scope link
       valid_lft forever preferred_lft forever
root@hn3:~# ip ma
1:      lo
        inet  224.0.0.1
        inet6 ff02::1
        inet6 ff01::1
2:      eth0
        link  01:00:5e:00:00:01
        inet  224.0.0.1
        inet6 ff02::1
        inet6 ff01::1
3:      eth1
        link  33:33:00:00:00:01
        inet6 ff02::1
        inet6 ff01::1
4:      vmbr0
        link  33:33:00:00:00:01
        link  01:00:5e:00:00:01
        link  33:33:ff:aa:dc:b0
        link  01:00:5e:40:a5:d9
        inet  239.192.165.217
        inet  224.0.0.1
        inet6 ff02::1:ffaa:dcb0
        inet6 ff02::1
        inet6 ff01::1
5:      tap103i0
        link  33:33:00:00:00:01
        link  01:00:5e:00:00:01
        link  33:33:ff:85:51:68
        inet  224.0.0.1
        inet6 ff02::1:ff85:5168
        inet6 ff02::1
        inet6 ff01::1
root@hn3:~# cat /etc/corosync/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: hn5
    nodeid: 2
    quorum_votes: 1
    ring0_addr: hn5
  }

  node {
    name: hn6
    nodeid: 5
    quorum_votes: 1
    ring0_addr: hn6
  }

  node {
    name: hn7
    nodeid: 1
    quorum_votes: 1
    ring0_addr: hn7
  }

  node {
    name: hn4
    nodeid: 4
    quorum_votes: 1
    ring0_addr: hn4
  }

  node {
    name: hn3
    nodeid: 3
    quorum_votes: 1
    ring0_addr: hn3
  }

}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cspcluster
  config_version: 22
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    bindnetaddr: 192.168.101.46
    ringnumber: 0
  }

}

root@hn3:~#
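If the configs and multicast all check out, a common next step for the 595 error is to restart the cluster stack on the affected node (standard PVE 4.x service names):

Code:
# corosync handles cluster membership; pve-cluster runs pmxcfs, which
# provides /etc/pve and is what the GUI relies on for cluster state
systemctl restart corosync
systemctl restart pve-cluster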
 
