[SOLVED] No connection between nodes on the second bonding interface

Jul 28, 2020
Hello,

I am testing with two Proxmox nodes.
Each node has four 10 Gbit NICs.
I want a separate network for cluster communication and migration.

My problem:
I can't get a connection between the nodes on the second bonding interface (bond1).
They cannot ping each other.

Internet access works, and so does the connection between the nodes on bond0 (192.168.105.x).

I also tried swapping the bond-slaves between bond0 and bond1; same result.


/etc/network/interfaces on node1:
Code:
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
iface ens1f0np0 inet manual
iface ens1f1np1 inet manual
iface ens3f0np0 inet manual
iface ens3f1np1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens1f0np0 ens1f1np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto bond1
iface bond1 inet static
        address 10.0.0.1/24
        bond-slaves ens3f0np0 ens3f1np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.105.21/24
        gateway 192.168.105.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

/etc/network/interfaces on node2:
Code:
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
iface ens1f0np0 inet manual
iface ens1f1np1 inet manual
iface ens3f0np0 inet manual
iface ens3f1np1 inet manual

auto bond0
iface bond0 inet manual
        bond-slaves ens1f0np0 ens1f1np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto bond1
iface bond1 inet static
        address 10.0.0.2/24
        bond-slaves ens3f0np0 ens3f1np1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 192.168.105.22/24
        gateway 192.168.105.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0


ip route on node1:
Code:
default via 192.168.105.1 dev vmbr0 onlink
10.0.0.0/24 dev bond1 proto kernel scope link src 10.0.0.1
192.168.105.0/24 dev vmbr0 proto kernel scope link src 192.168.105.21

ip route on node2:
Code:
default via 192.168.105.1 dev vmbr0 onlink 
10.0.0.0/24 dev bond1 proto kernel scope link src 10.0.0.2 
192.168.105.0/24 dev vmbr0 proto kernel scope link src 192.168.105.22

ip a on node 1:
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens1f0np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 2c:ea:7f:45:8c:70 brd ff:ff:ff:ff:ff:ff
3: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2c:ea:7f:47:4f:0f brd ff:ff:ff:ff:ff:ff
4: ens1f1np1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 2c:ea:7f:45:8c:70 brd ff:ff:ff:ff:ff:ff
5: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2c:ea:7f:47:4f:10 brd ff:ff:ff:ff:ff:ff
6: ens3f0np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:6d:b3:c0 brd ff:ff:ff:ff:ff:ff
7: ens3f1np1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:6d:b3:c0 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 2c:ea:7f:45:8c:70 brd ff:ff:ff:ff:ff:ff
9: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:6d:b3:c0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::be97:e1ff:fe6d:b3c0/64 scope link 
       valid_lft forever preferred_lft forever
10: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2c:ea:7f:45:8c:70 brd ff:ff:ff:ff:ff:ff
    inet 192.168.105.21/24 brd 192.168.105.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::2eea:7fff:fe45:8c70/64 scope link 
       valid_lft forever preferred_lft forever


ip a on node 2:
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2c:ea:7f:47:74:11 brd ff:ff:ff:ff:ff:ff
3: ens1f0np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 2c:ea:7f:45:8c:80 brd ff:ff:ff:ff:ff:ff
4: ens1f1np1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether 2c:ea:7f:45:8c:80 brd ff:ff:ff:ff:ff:ff
5: eno2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2c:ea:7f:47:74:12 brd ff:ff:ff:ff:ff:ff
6: ens3f0np0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:6d:75:e0 brd ff:ff:ff:ff:ff:ff
7: ens3f1np1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1 state UP group default qlen 1000
    link/ether bc:97:e1:6d:75:e0 brd ff:ff:ff:ff:ff:ff
8: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether 2c:ea:7f:45:8c:80 brd ff:ff:ff:ff:ff:ff
9: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether bc:97:e1:6d:75:e0 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 brd 10.0.0.255 scope global bond1
       valid_lft forever preferred_lft forever
    inet6 fe80::be97:e1ff:fe6d:75e0/64 scope link 
       valid_lft forever preferred_lft forever
10: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 2c:ea:7f:45:8c:80 brd ff:ff:ff:ff:ff:ff
    inet 192.168.105.22/24 brd 192.168.105.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::2eea:7fff:fe45:8c80/64 scope link 
       valid_lft forever preferred_lft forever
root@px2:~#


Does anybody have any idea?
 
How are the two nodes connected? Back-to-back or via a switch?
If it is a switch, check the LAG status on the switch.
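
You can also check the LACP negotiation from the Linux side. A quick check (assuming the bond1 name from your config; the output details vary by kernel version):
Code:
# bonding driver's view of bond1, including 802.3ad aggregator and LACP partner info
cat /proc/net/bonding/bond1

# driver-level details for the bond and its slaves
ip -d link show bond1

If the switch LAG for those ports is not negotiating, the partner info in /proc/net/bonding/bond1 usually stays empty (all-zero partner MAC).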
 
The nodes are connected via a switch (switch stack).

When I swapped the NICs between the two bonding interfaces, I had the same problem, so I think the switch is not the problem.
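
One way to rule out an LACP problem on the second LAG anyway would be to temporarily run bond1 in active-backup mode, which needs no LAG configuration on the switch, and test the ping again. A sketch for node1 (node2 the same, with 10.0.0.2/24):
Code:
auto bond1
iface bond1 inet static
        address 10.0.0.1/24
        bond-slaves ens3f0np0 ens3f1np1
        bond-miimon 100
        # active-backup needs no LACP/LAG on the switch side;
        # if ping works in this mode, the problem is the 802.3ad negotiation
        bond-mode active-backup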


Now I have used the same subnet for the bond1 interface as for vmbr0.
This worked fine, but it causes another problem when I want to migrate a VM/container to another host:

Code:
could not get migration ip: multiple, different, IP address configured for network '192.168.105.21/24'

So I need a separate network.
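
Once the 10.0.0.0/24 network is reachable, the dedicated migration network can be configured cluster-wide via the migration option in /etc/pve/datacenter.cfg (a sketch, assuming the 10.0.0.0/24 subnet from above; adjust the CIDR to your setup):
Code:
# /etc/pve/datacenter.cfg
# send migration traffic over the dedicated 10.0.0.0/24 network
migration: secure,network=10.0.0.0/24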

The routes look OK:
Code:
default via 192.168.105.1 dev vmbr0 onlink
10.0.0.0/24 dev bond1 proto kernel scope link src 10.0.0.1
192.168.105.0/24 dev vmbr0 proto kernel scope link src 192.168.105.21
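
Since the routes look fine but the ping over bond1 still fails, a next step could be to check whether ARP resolution works across bond1 at all (a sketch; arping may need the iputils-arping package):
Code:
# neighbor (ARP) entries learned on bond1
ip neigh show dev bond1

# force an ARP probe for the peer over bond1 (run on node1)
arping -I bond1 10.0.0.2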
 

Attachments

  • 20200729-173159.png (33.5 KB)
  • 20200729-173220.png (33.7 KB)
