3-node cluster: wrong network interface used for replication with dual NICs per node

maacarbo

New Member
Jan 9, 2021
Hi all,

I installed a Proxmox cluster where every node has 2 NICs:
- vmbr0: bridged (for the VMs/containers) and used as link1 (fallback) for the cluster
- enx000ec6705d0f: used as link0, dedicated to the cluster

When I replicate a VM/container, the transfer always goes over the bridged link.
Since I dedicated a NIC to the cluster, I would expect enx000ec6705d0f to be used for the replication (as it is meant for cluster traffic), but it always uses vmbr0 instead, taking most of the bandwidth away from my VMs/containers.
I also tried removing vmbr0 as link1 (fallback) from the cluster configuration, but the result is the same.
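
For what it's worth, the way I'm checking which interface the traffic goes over is roughly this: comparing the byte counters on both NICs before and after a replication run.

Code:
# compare RX/TX byte counters on both NICs before and after a replication job
ip -s link show vmbr0
ip -s link show enx000ec6705d0f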

I would expect all cluster traffic to go over link0 as defined in the cluster configuration.
Is there something I can do to "force" that behavior?
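
I'm not sure whether replication traffic follows the corosync links at all, or rather the migration network. If it's the latter, one thing I could try (untested on my side) is setting a dedicated migration network in /etc/pve/datacenter.cfg, something like:

Code:
migration: secure,network=192.168.101.0/24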

Some files as info:

/etc/network/interfaces
Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.0.3.2/8
    gateway 10.0.0.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0

#iface enx000ec6705d0f inet manual
allow-hotplug enx000ec6705d0f
iface enx000ec6705d0f inet static
    address 192.168.101.12/24
    gateway 192.168.101.1

/etc/pve/corosync.conf
Code:
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: vmsrv02
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.101.12
    ring1_addr: 10.0.3.2
  }
  node {
    name: vmsrv03
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.101.13
    ring1_addr: 10.0.3.3
  }
  node {
    name: vmsrv04
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.101.14
    ring1_addr: 10.0.3.4
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: vcluster
  config_version: 5
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
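
If it helps with diagnosing this, I can also post the link status from each node; as far as I understand, corosync-cfgtool shows the state of link 0 and link 1 as corosync sees them:

Code:
corosync-cfgtool -s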
 