Cluster Communication - how to set a specific interface?

mgabriel
I can't find any way to route cluster communication traffic (e.g. corosync) through a specific interface. On one installation I see corosync retransmitting frames, and I want to route that traffic over a dedicated link to avoid side effects from bandwidth problems.

Any ideas?

Thanks,
Marco
 
The hostname is resolved via /etc/hosts; the resulting address/network is used for cluster communication.
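A minimal /etc/hosts sketch for a two-node setup like the one in this thread (the host names and dedicated-link addresses are taken from the configs posted further down; treat them as placeholders for your own environment). The key point is that each node's own hostname must resolve to the cluster-link address, not to 127.0.1.1 or a production-LAN address:

Code:
127.0.0.1        localhost
# dedicated cluster link - cluster node names resolve here
192.168.222.10   asterix
192.168.222.20   obelix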
 
Unfortunately, this does not work.

We have a dedicated cluster link between the two nodes, and the node names resolve via /etc/hosts to the IPs of this dedicated link. If I ping those IPs or the names, I can see iptraf counting packets on the corresponding interfaces; when I stop pinging, there is no activity on the link.

I checked the network bindings and saw that corosync still uses the IPs of the production LAN (192.168.111.0/24) rather than the IPs of the cluster link (192.168.222.0/24):

Code:
udp        0      0 192.168.111.9:5404      0.0.0.0:*                           3209/corosync
udp        0      0 192.168.111.9:5405      0.0.0.0:*                           3209/corosync
udp        0      0 239.192.188.113:5405    0.0.0.0:*                           3209/corosync

So, the question remains the same: How can I set up cman/corosync to use a specific link?
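Since cman/corosync bind to whatever address the node's own name resolves to, one quick sanity check is to ask the resolver directly and then inspect the sockets again. A sketch of that check (the getent output shown is what you would expect if the name maps to the dedicated link; note that the first matching /etc/hosts line wins, so an earlier entry mapping the name to 127.0.1.1 or a production-LAN address takes precedence):

Code:
# Which address does the local node name resolve to?
root@obelix:~# getent hosts obelix
192.168.222.20  obelix

# Which addresses does corosync actually bind?
root@obelix:~# netstat -anup | grep corosync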

Thanks,
Marco
 
The answer is still the same: if it does not work on your side, then something in your configuration is wrong. Did you reboot all nodes after the changes?
 
Yes - both nodes were rebooted.

Here are the /etc/network/interfaces of both nodes:

Node Obelix:
Code:
root@obelix:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback


iface eth0 inet manual


iface eth1 inet manual


iface eth2 inet manual


iface eth3 inet manual


iface eth4 inet manual


iface eth5 inet manual


auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad


auto bond1
iface bond1 inet manual
        slaves eth4 eth5
        bond_miimon 100
        bond_mode 802.3ad


auto bond2
iface bond2 inet static
        address  192.168.222.20
        netmask  255.255.255.0
        slaves eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4


auto vmbr0
iface vmbr0 inet static
        address  192.168.111.9
        netmask  255.255.255.0
        gateway  192.168.111.253
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0


auto vmbr1
iface vmbr1 inet static
        address  10.10.99.4
        netmask  255.255.255.0
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0

Node Asterix:
Code:
# network interface settings
auto lo
iface lo inet loopback


iface eth0 inet manual


iface eth1 inet manual


iface eth2 inet manual


iface eth3 inet manual


iface eth4 inet manual


iface eth5 inet manual


auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode 802.3ad


auto bond1
iface bond1 inet manual
        slaves eth2 eth3
        bond_miimon 100
        bond_mode 802.3ad


auto bond2
iface bond2 inet static
        address  192.168.222.10
        netmask  255.255.255.0
        slaves eth4 eth5
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer3+4


auto vmbr0
iface vmbr0 inet static
        address  192.168.111.8
        netmask  255.255.255.0
        gateway  192.168.111.253
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0


auto vmbr1
iface vmbr1 inet static
        address  10.10.99.2
        netmask  255.255.255.0
        bridge_ports bond1
        bridge_stp off
        bridge_fd 0

cluster.conf on both nodes:
Code:
<?xml version="1.0"?>
<cluster name="pvecluster" config_version="4">
  <cman keyfile="/var/lib/pve-cluster/corosync.authkey">
  </cman>
  <clusternodes>
  <clusternode name="asterix" votes="1" nodeid="1"/>
  <clusternode name="obelix" votes="1" nodeid="2"/>
  </clusternodes>
</cluster>
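For completeness: with cman there is no bindnetaddr-style option in cluster.conf; the bind address is derived from resolving each clusternode name, which is why the /etc/hosts entries matter. The one related knob is an explicit multicast address. A hedged sketch of that fragment (the address shown is the one from the netstat output above; your cluster may use a different group):

Code:
<cman keyfile="/var/lib/pve-cluster/corosync.authkey">
  <multicast addr="239.192.188.113"/>
</cman>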

Best regards,
Marco
 
And your /etc/hosts settings?
 
