Proxmox: use unicast instead of multicast. NOT WORKING.

zystem

New Member
Feb 5, 2013
I set
Code:
transport="udpu"
in cluster.conf, BUT the log still shows:
Code:
[TOTEM ] Initializing transport (UDP/IP Multicast).

The nodes were rebooted twice, and they do not see each other.

Code:
pvecm nodes
Node  Sts   Inc   Joined               Name
   1   X      0                        test2
   2   M  13560   2013-02-05 06:31:57  test3
   4   X      0                        test4
Code:
root@test3:/# grep -i totem /var/log/cluster/corosync.log
Feb 05 06:31:57 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Feb 05 06:31:57 corosync [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Feb 05 06:31:57 corosync [TOTEM ] The network interface [192.168.1.2] is now up.
Feb 05 06:31:57 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Code:
root@test3:/# cat /etc/pve/cluster.conf
<cluster config_version="40" name="test">
  <cman>
    keyfile="/var/lib/pve-cluster/corosync.authkey"
    transport="udpu"
  </cman>
  <fencedevices>
    <fencedevice agent="fence_ipmilan" ipaddr="test2.ipmi.local" lanplus="1" login="user" name="ipmi1" passwd="password" power_wait="5"/>
    <fencedevice agent="fence_ipmilan" ipaddr="test3.ipmi.local" lanplus="1" login="user" name="ipmi2" passwd="password" power_wait="5"/>
    <fencedevice agent="fence_ipmilan" ipaddr="test4.ipmi.local" lanplus="1" login="user" name="ipmi3" passwd="password" power_wait="5"/>
  </fencedevices>
  <clusternodes>
    <clusternode name="test2" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="ipmi1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="test3" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="ipmi2"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="test4" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device name="ipmi3"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <rm>
    <pvevm autostart="1" vmid="101"/>
    <pvevm autostart="1" vmid="102"/>
  </rm>
</cluster>
Code:
root@test3:/# hostname
test3

ip 192.168.1.2
Code:
root@test3:/# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
192.168.1.2 test3.local test3 pvelocalhost
192.168.1.1 test2.local test2
192.168.1.3 test4.local test4

# The following lines are desirable for IPv6 capable hosts

::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Code:
pveversion -v
pve-manager: 2.2-32 (pve-manager/2.2/3089a616)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-80
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-34
qemu-server: 2.0-72
pve-firmware: 1.0-21
libpve-common-perl: 1.0-41
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-10
ksm-control-daemon: 1.1-1
 

zystem

I found the solution.
The section
Code:
<cman>
  keyfile="/var/lib/pve-cluster/corosync.authkey"
  transport="udpu"
</cman>
is wrong: written that way, keyfile and transport end up as text inside the element, not as XML attributes, so cman never sees the transport setting.

It must be
Code:
<cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
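To spell out why the first form fails: in XML, settings only count as attributes when they sit inside the opening tag itself. A quick illustration with Python's standard xml.etree.ElementTree (a sketch, not Proxmox code):

```python
import xml.etree.ElementTree as ET

# Broken form: the key=value lines are element *text*, not attributes.
broken = ET.fromstring(
    '<cman>\nkeyfile="/var/lib/pve-cluster/corosync.authkey"\ntransport="udpu"\n</cman>'
)
print(broken.get("transport"))  # None: no transport attribute is defined

# Correct form: the settings are real attributes on the <cman/> tag.
fixed = ET.fromstring(
    '<cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>'
)
print(fixed.get("transport"))  # udpu
```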
 

zystem

I found another solution besides unicast.
In my experience, multicast often fails to work on Cisco switches. Cisco switches are usually managed switches, so they support VLANs.
It is possible to use VLAN + broadcast instead of multicast:
Code:
<cman keyfile="/var/lib/pve-cluster/corosync.authkey" broadcast="yes"/>
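Whichever workaround you apply, it is worth re-running the same grep from above after restarting cman to confirm the active totem transport. A sketch; the "UDP/IP Unicast" wording is an assumption based on corosync 1.4 log messages, and the real file is /var/log/cluster/corosync.log:

```shell
# On the node you would run:
#   grep -i 'initializing transport' /var/log/cluster/corosync.log
# Simulated here with the line expected for transport="udpu" (assumed wording):
echo 'Feb 05 07:00:00 corosync [TOTEM ] Initializing transport (UDP/IP Unicast).' |
  grep -i 'initializing transport'
```

If the line still reads "UDP/IP Multicast", the config change did not take effect.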

P.S. Please add this to the wiki.
 

tom

Proxmox Staff Member
Aug 29, 2006
Thanks. If you think this is useful information for others, just add it yourself; everybody can add and improve content on the wiki.
 
