"Waiting for quorum" in Proxmox VE 4.0 Beta 2

@kyob what HW are you using?
 
Hi.

I tried the final Proxmox 4.0 with kernel 4.2.2-1 and I have the same problem. I'm a little sad, because VMware Workstation is the only way I have to test Proxmox. :(
 
Same problem here. I'm trying 4.0 Beta 2 on real hardware (3 old IBM x3650, switch TL-SG3424). After the "Waiting for quorum" I'm stuck on:
Code:
Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      1
Quorum:           2 Activity blocked
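(That block is the Votequorum section as printed by pvecm status; corosync-quorumtool -s shows the same state directly from corosync, so you can run either on each node to compare.)
Code:
pvecm status
corosync-quorumtool -s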
 
I have the same problem too.

Code:
root@n1:# pvecm add 172.16.1.66
The authenticity of host '172.16.1.66 (172.16.1.66)' can't be established.
ECDSA key fingerprint is da:51:f3:18:40:b0:f0:81:4b:1c:92:a1:22:76:c3:61.
Are you sure you want to continue connecting (yes/no)? yes
root@172.16.1.66's password: 
copy corosync auth key
stopping pve-cluster service
backup old database
waiting for quorum...

Multicast is not working:
Code:
n2 :   unicast, seq=1, size=69 bytes, dist=0, time=0.188ms
n1 :   unicast, seq=2, size=69 bytes, dist=0, time=0.188ms
n2 :   unicast, seq=2, size=69 bytes, dist=0, time=0.193ms
n1 :   unicast, seq=3, size=69 bytes, dist=0, time=0.113ms
n2 :   unicast, seq=3, size=69 bytes, dist=0, time=0.138ms
n1 :   unicast, seq=4, size=69 bytes, dist=0, time=0.093ms
n2 :   unicast, seq=4, size=69 bytes, dist=0, time=0.201ms
^C
n1 :   unicast, xmt/rcv/%loss = 4/4/0%, min/avg/max/std-dev = 0.084/0.119/0.188/0.047
n1 : multicast, xmt/rcv/%loss = 4/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000
n2 :   unicast, xmt/rcv/%loss = 4/4/0%, min/avg/max/std-dev = 0.138/0.180/0.201/0.029
n2 : multicast, xmt/rcv/%loss = 4/0/100%, min/avg/max/std-dev = 0.000/0.000/0.000/0.000
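(If it helps anyone reproduce the test: the output above is from omping, started at roughly the same time on every node, e.g.:)
Code:
# run this simultaneously on n0, n1 and n2; hostnames must resolve, or use IPs
omping n0 n1 n2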

Code:
root@n0:~# pveversion -v
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-50 (running version: 4.0-50/d3a6b7e5)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-23
qemu-server: 4.0-31
pve-firmware: 1.1-7
libpve-common-perl: 4.0-32
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-27
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-10
pve-container: 1.0-10
pve-firewall: 2.0-12
pve-ha-manager: 1.0-10
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
root@n0:~# uname -a
Linux n0 4.2.2-1-pve #1 SMP Mon Oct 5 18:23:31 CEST 2015 x86_64 GNU/Linux
 
OK! I solved the problem. If you use the network interface as a bridge, you must turn off multicast_snooping on the bridge interface.
Example:
Code:
echo "0" > /sys/class/net/vmbr0/bridge/multicast_snooping


and multicast will work normally:
Code:
n2 :   unicast, seq=1, size=69 bytes, dist=0, time=0.160ms
n2 : multicast, seq=1, size=69 bytes, dist=0, time=0.163ms
n2 :   unicast, seq=2, size=69 bytes, dist=0, time=0.137ms
n2 : multicast, seq=2, size=69 bytes, dist=0, time=0.140ms
n2 :   unicast, seq=3, size=69 bytes, dist=0, time=0.160ms
n2 : multicast, seq=3, size=69 bytes, dist=0, time=0.161ms
^C
n2 :   unicast, xmt/rcv/%loss = 3/3/0%, min/avg/max/std-dev = 0.137/0.152/0.160/0.013
n2 : multicast, xmt/rcv/%loss = 3/3/0%, min/avg/max/std-dev = 0.140/0.155/0.163/0.0
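To keep the setting across reboots, one option (just a sketch; the address and bridge_ports below are placeholders for your own bridge definition in /etc/network/interfaces) is a post-up line on the bridge:
Code:
auto vmbr0
iface vmbr0 inet static
        address  172.16.1.65
        netmask  255.255.255.0
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        # disable IGMP snooping on the bridge so corosync multicast gets through
        post-up echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping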