Just another Unicast problem.

Sakis

Hello to the Proxmox community.

I have been working with Proxmox for one and a half months now, and I find it very stable and promising. My goal is a 3-node HA cluster. The hosting company where I am going to run this project (OVH) doesn't provide multicast support without paying extra for a private VLAN (vRack). The budget is tight, so I will have to stick with unicast. So, back to local testing.

I've read all the unicast-related posts, and of course the wiki, many times over, looking for whatever part I am getting wrong. No result. I tested on live machines and locally; it just refuses to work. The unicast edit itself is easy, and I really can't find where the problem is. I will give as many logs as I can, so hopefully someone can help.

It's just a 2-node cluster to start with:
node1 -> unicast1 192.168.1.200
node2 -> unicast2 192.168.1.201

ssmping works as it's supposed to, just like in the wiki.
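(For reference, the test was the one from the wiki. A sketch, assuming the ssmping package, which provides both tools: ssmpingd listens on one node, asmping sends to it from the other.)
Code:
# on node1: start the listener
root@unicast1:~# ssmpingd
# on node2: ping node1 over multicast group 224.0.2.1
root@unicast2:~# asmping 224.0.2.1 192.168.1.200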

node1
Code:
root@unicast1:~# ifconfig 
eth0      Link encap:Ethernet  HWaddr 52:54:00:ec:46:42  
          inet6 addr: fe80::5054:ff:feec:4642/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13500 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6958 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:18042057 (17.2 MiB)  TX bytes:494700 (483.1 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:22 errors:0 dropped:0 overruns:0 frame:0
          TX packets:22 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1760 (1.7 KiB)  TX bytes:1760 (1.7 KiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vmbr0     Link encap:Ethernet  HWaddr 52:54:00:ec:46:42  
          inet addr:192.168.1.200  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:feec:4642/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13375 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6943 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:17830372 (17.0 MiB)  TX bytes:493734 (482.1 KiB)
root@unicast1:~# pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
root@unicast1:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.200 unicast1.lan unicast1 pvelocalhost
192.168.1.201 unicast2.lan unicast2

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
root@unicast1:~# ping unicast2
PING unicast2.lan (192.168.1.201) 56(84) bytes of data.
64 bytes from unicast2.lan (192.168.1.201): icmp_req=1 ttl=64 time=1.12 ms
64 bytes from unicast2.lan (192.168.1.201): icmp_req=2 ttl=64 time=0.323 ms
64 bytes from unicast2.lan (192.168.1.201): icmp_req=3 ttl=64 time=0.175 ms
64 bytes from unicast2.lan (192.168.1.201): icmp_req=4 ttl=64 time=0.243 ms
^C
--- unicast2.lan ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.175/0.466/1.123/0.382 ms


and node2
Code:
root@unicast2:~# ifconfig 
eth0      Link encap:Ethernet  HWaddr 52:54:00:c8:bb:a9  
          inet6 addr: fe80::5054:ff:fec8:bba9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13142 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7219 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:18087354 (17.2 MiB)  TX bytes:514782 (502.7 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

vmbr0     Link encap:Ethernet  HWaddr 52:54:00:c8:bb:a9  
          inet addr:192.168.1.201  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fec8:bba9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13008 errors:0 dropped:0 overruns:0 frame:0
          TX packets:7202 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:17887881 (17.0 MiB)  TX bytes:513764 (501.7 KiB)
root@unicast2:~# pveversion -v
proxmox-ve-2.6.32: 3.1-109 (running kernel: 2.6.32-23-pve)
pve-manager: 3.1-3 (running version: 3.1-3/dc0e9b0e)
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-7
qemu-server: 3.1-1
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-10
pve-libspice-server1: 0.12.4-1
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
root@unicast2:~# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.201 unicast2.lan unicast2 pvelocalhost
192.168.1.200 unicast1.lan unicast1
# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
root@unicast2:~# ping unicast1
PING unicast1.lan (192.168.1.200) 56(84) bytes of data.
64 bytes from unicast1.lan (192.168.1.200): icmp_req=1 ttl=64 time=0.170 ms
64 bytes from unicast1.lan (192.168.1.200): icmp_req=2 ttl=64 time=0.309 ms
64 bytes from unicast1.lan (192.168.1.200): icmp_req=3 ttl=64 time=0.308 ms
^C
--- unicast1.lan ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.170/0.262/0.309/0.066 ms

"search lan" is in both resolv.conf

I create the cluster on the first node. Everything OK.
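For completeness, the create step was the standard one (a sketch; "proxmox" is the cluster name shown in the status below):
Code:
root@unicast1:~# pvecm create proxmox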
Code:
root@unicast1:~# pvecm status
Version: 6.2.0
Config Version: 1
Cluster Name: proxmox
Cluster Id: 14330
Cluster Member: Yes
Cluster Generation: 8
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1  
Active subsystems: 5
Flags: 
Ports Bound: 0  
Node name: unicast1
Node ID: 1
Multicast addresses: 239.192.55.50 
Node addresses: 192.168.1.200

I edit the config file, verify it, and then activate it through the web GUI (see the attached image).
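For reference, the edit follows the wiki's unicast instructions: copy cluster.conf to cluster.conf.new, add transport="udpu" to the cman tag, bump config_version, validate, then activate through the GUI. A sketch of what mine looks like before adding the second node (keyfile is the stock path; treat this as an outline, not an exact dump):
Code:
root@unicast1:~# cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
root@unicast1:~# cat /etc/pve/cluster.conf.new
<?xml version="1.0"?>
<cluster name="proxmox" config_version="2">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
  <clusternodes>
    <clusternode name="unicast1" votes="1" nodeid="1"/>
  </clusternodes>
</cluster>
root@unicast1:~# ccs_config_validate -v -f /etc/pve/cluster.conf.new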

After reboot, the multicast address reported by pvecm changed from 239.192.55.50 to 255.255.255.255, which, as I understand it, means the udpu transport is active.

Code:
root@unicast1:~# pvecm status
Version: 6.2.0
Config Version: 2
Cluster Name: proxmox
Cluster Id: 14330
Cluster Member: Yes
Cluster Generation: 12
Membership state: Cluster-Member
Nodes: 1
Expected votes: 1
Total votes: 1
Node votes: 1
Quorum: 1  
Active subsystems: 5
Flags: 
Ports Bound: 0  
Node name: unicast1
Node ID: 1
Multicast addresses: 255.255.255.255 
Node addresses: 192.168.1.200

Then I reboot both nodes. (View attachment 1654)
I add the node to the cluster and hit the usual problem: "waiting for quorum..."
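The join on node2 was the standard command (sketch), and it hangs right at that message:
Code:
root@unicast2:~# pvecm add 192.168.1.200
...
waiting for quorum...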

Here is the syslog from node1.
Code:
Sep  5 17:24:18 unicast1 kernel: eth0: no IPv6 routers present
Sep  5 17:24:18 unicast1 spiceproxy[2726]: starting server
Sep  5 17:24:18 unicast1 spiceproxy[2726]: starting 1 worker(s)
Sep  5 17:24:18 unicast1 spiceproxy[2726]: worker 2727 started
Sep  5 17:24:18 unicast1 pvesh: <root@pam> starting task UPID:unicast1:00000AA5:00000604:52289412:startall::root@pam:
Sep  5 17:24:18 unicast1 pvesh: <root@pam> end task UPID:unicast1:00000AA5:00000604:52289412:startall::root@pam: OK
Sep  5 17:24:26 unicast1 kernel: venet0: no IPv6 routers present
Sep  5 17:26:20 unicast1 pmxcfs[2183]: [dcdb] notice: wrote new cluster config '/etc/cluster/cluster.conf'
Sep  5 17:26:20 unicast1 corosync[2342]:   [QUORUM] Members[1]: 1
Sep  5 17:26:20 unicast1 pmxcfs[2183]: [status] notice: update cluster info (cluster name  proxmox, version = 3)
Sep  5 17:26:36 unicast1 corosync[2342]:   [CLM   ] CLM CONFIGURATION CHANGE
Sep  5 17:26:36 unicast1 corosync[2342]:   [CLM   ] New Configuration:
Sep  5 17:26:36 unicast1 corosync[2342]:   [CLM   ] #011r(0) ip(192.168.1.200) 
Sep  5 17:26:36 unicast1 corosync[2342]:   [CLM   ] Members Left:
Sep  5 17:26:36 unicast1 corosync[2342]:   [CLM   ] Members Joined:
Sep  5 17:26:36 unicast1 corosync[2342]:   [CLM   ] CLM CONFIGURATION CHANGE
Sep  5 17:26:36 unicast1 corosync[2342]:   [CLM   ] New Configuration:
Sep  5 17:26:36 unicast1 corosync[2342]:   [CLM   ] #011r(0) ip(192.168.1.200) 
Sep  5 17:26:36 unicast1 corosync[2342]:   [CLM   ] Members Left:
Sep  5 17:26:36 unicast1 corosync[2342]:   [CLM   ] Members Joined:
Sep  5 17:26:36 unicast1 corosync[2342]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Sep  5 17:26:36 unicast1 corosync[2342]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.1.200) ; members(old:1 left:0)
Sep  5 17:26:36 unicast1 corosync[2342]:   [MAIN  ] Completed service synchronization, ready to provide service.
Sep  5 17:26:38 unicast1 corosync[2342]:   [CLM   ] CLM CONFIGURATION CHANGE
Sep  5 17:26:38 unicast1 corosync[2342]:   [CLM   ] New Configuration:
Sep  5 17:26:38 unicast1 corosync[2342]:   [CLM   ] #011r(0) ip(192.168.1.200) 
Sep  5 17:26:38 unicast1 corosync[2342]:   [CLM   ] Members Left:
Sep  5 17:26:38 unicast1 corosync[2342]:   [CLM   ] Members Joined:
Sep  5 17:26:38 unicast1 corosync[2342]:   [CLM   ] CLM CONFIGURATION CHANGE
Sep  5 17:26:38 unicast1 corosync[2342]:   [CLM   ] New Configuration:
Sep  5 17:26:38 unicast1 corosync[2342]:   [CLM   ] #011r(0) ip(192.168.1.200) 
Sep  5 17:26:38 unicast1 corosync[2342]:   [CLM   ] Members Left:
Sep  5 17:26:38 unicast1 corosync[2342]:   [CLM   ] Members Joined:
Sep  5 17:26:38 unicast1 corosync[2342]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Sep  5 17:26:38 unicast1 corosync[2342]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.1.200) ; members(old:1 left:0)
Sep  5 17:26:38 unicast1 corosync[2342]:   [MAIN  ] Completed service synchronization, ready to provide service.
Sep  5 17:26:41 unicast1 corosync[2342]:   [CLM   ] CLM CONFIGURATION CHANGE
Sep  5 17:26:41 unicast1 corosync[2342]:   [CLM   ] New Configuration:
Sep  5 17:26:41 unicast1 corosync[2342]:   [CLM   ] #011r(0) ip(192.168.1.200) 
Sep  5 17:26:41 unicast1 corosync[2342]:   [CLM   ] Members Left:
Sep  5 17:26:41 unicast1 corosync[2342]:   [CLM   ] Members Joined:
Sep  5 17:26:41 unicast1 corosync[2342]:   [CLM   ] CLM CONFIGURATION CHANGE
Sep  5 17:26:41 unicast1 corosync[2342]:   [CLM   ] New Configuration:
Sep  5 17:26:41 unicast1 corosync[2342]:   [CLM   ] #011r(0) ip(192.168.1.200) 
Sep  5 17:26:41 unicast1 corosync[2342]:   [CLM   ] Members Left:
Sep  5 17:26:41 unicast1 corosync[2342]:   [CLM   ] Members Joined:
Sep  5 17:26:41 unicast1 corosync[2342]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Sep  5 17:26:41 unicast1 corosync[2342]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.1.200) ; members(old:1 left:0)
Sep  5 17:26:41 unicast1 corosync[2342]:   [MAIN  ] Completed service synchronization, ready to provide service.
Sep  5 17:26:45 unicast1 corosync[2342]:   [CLM   ] CLM CONFIGURATION CHANGE
Sep  5 17:26:45 unicast1 corosync[2342]:   [CLM   ] New Configuration:
Sep  5 17:26:45 unicast1 corosync[2342]:   [CLM   ] #011r(0) ip(192.168.1.200) 
Sep  5 17:26:45 unicast1 corosync[2342]:   [CLM   ] Members Left:
Sep  5 17:26:45 unicast1 corosync[2342]:   [CLM   ] Members Joined:
Sep  5 17:26:45 unicast1 corosync[2342]:   [CLM   ] CLM CONFIGURATION CHANGE
Sep  5 17:26:45 unicast1 corosync[2342]:   [CLM   ] New Configuration:
Sep  5 17:26:45 unicast1 corosync[2342]:   [CLM   ] #011r(0) ip(192.168.1.200) 
Sep  5 17:26:45 unicast1 corosync[2342]:   [CLM   ] Members Left:
Sep  5 17:26:45 unicast1 corosync[2342]:   [CLM   ] Members Joined:
Sep  5 17:26:45 unicast1 corosync[2342]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Sep  5 17:26:45 unicast1 corosync[2342]:   [CPG   ] chosen downlist: sender r(0) ip(192.168.1.200) ; members(old:1 left:0)
Sep  5 17:26:45 unicast1 corosync[2342]:   [MAIN  ] Completed service synchronization, ready to provide service.
Sep  5 17:26:49 unicast1 corosync[2342]:   [CLM   ] CLM CONFIGURATION CHANGE
Sep  5 17:26:49 unicast1 corosync[2342]:   [CLM   ] New Configuration:
Sep  5 17:26:49 unicast1 corosync[2342]:   [CLM   ] #011r(0) ip(192.168.1.200) 
Sep  5 17:26:49 unicast1 corosync[2342]:   [CLM   ] Members Left:
Sep  5 17:26:49 unicast1 corosync[2342]:   [CLM   ] Members Joined:
Sep  5 17:26:49 unicast1 corosync[2342]:   [CLM   ] CLM CONFIGURATION CHANGE
Sep  5 17:26:49 unicast1 corosync[2342]:   [CLM   ] New Configuration:
Sep  5 17:26:49 unicast1 corosync[2342]:   [CLM   ] #011r(0) ip(192.168.1.200) 
Sep  5 17:26:49 unicast1 corosync[2342]:   [CLM   ] Members Left:
Sep  5 17:26:49 unicast1 corosync[2342]:   [CLM   ] Members Joined:
...
and the same block of messages loops again and again.
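If it helps, I can also capture the totem traffic on both nodes to check whether the unicast packets arrive at all (assuming corosync is on its default UDP ports 5404/5405):
Code:
root@unicast1:~# tcpdump -ni vmbr0 'udp port 5404 or udp port 5405'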

Sorry for the long post; I wanted to be thorough. If any other log is needed, please ask and I will provide it. Soon I will update this with logs from the real dedicated servers.
Thanks in advance.
Sakis

(Attachment: transport.png)
 
Bump!
Any suggestions?
I am on physical machines now and still can't make it work.
I also edited /etc/host.conf, just in case, adding:
Code:
order hosts,bind
multi on
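In case it matters, this is how I check which transport corosync actually came up with (a sketch; I am assuming corosync-objctl from corosync 1.x exposes the totem keys this way):
Code:
root@unicast1:~# corosync-objctl | grep -i transport
# if udpu is active I would expect something like:
# totem.transport=udpu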
 
