[SOLVED] Separate Migration Network VLAN Issue

Joe_:)
Apr 28, 2024
Hi All,

I have a 2-node cluster. I have added a second 1G NIC to each node and created a bond, and I have also created a separate VLAN for the high-availability traffic.

When trying to migrate a container I get the following error:

Code:
# /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=TEST-NODE01' -o 'UserKnownHostsFile=/etc/pve/nodes/TEST-NODE01/ssh_known_hosts' -o 'GlobalKnownHostsFile=none' root@10.100.1.2 /bin/true

ssh: connect to host 10.100.1.2 port 22: No route to host

ERROR: migration aborted (duration 00:00:03): Can't connect to destination address using public key

TASK ERROR: migration aborted

I am not sure where to start with troubleshooting the issue.
 
What does your network config look like?

Code:
cat /etc/network/interfaces
ip a
ip route show
 
@shanreich thank you for the reply.

Bash:
auto lo
iface lo inet loopback

iface eno2 inet manual

auto enx9cebe8d5db94
iface enx9cebe8d5db94 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.187/24
        gateway 192.168.1.1
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094


auto bond01
iface bond01 inet manual
    bond-slaves enx9cebe8d5db94 enx9cebe8d5db3e
    bond-miimon 100
    bond-mode balance-alb

auto bond01.10
iface bond01.10 inet static
    address 10.100.1.2/29
    gateway 10.100.1.1
    vlan-raw-device bond01


Bash:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether a4:bb:6d:77:69:e2 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
3: enx9cebe8d5db94: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond01 state UP group default qlen 1000
    link/ether 9c:eb:e8:d5:db:94 brd ff:ff:ff:ff:ff:ff
4: wlo1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 84:1b:77:f2:0b:4d brd ff:ff:ff:ff:ff:ff
    altname wlp0s20f3
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a4:bb:6d:77:69:e2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.187/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::a6bb:6dff:fe77:69e2/64 scope link
       valid_lft forever preferred_lft forever
6: bond01: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9c:eb:e8:d5:db:94 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::9eeb:e8ff:fed5:db94/64 scope link
       valid_lft forever preferred_lft forever
7: bond01.10@bond01: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 9c:eb:e8:d5:db:94 brd ff:ff:ff:ff:ff:ff
    inet 10.100.1.2/29 scope global bond01.10
       valid_lft forever preferred_lft forever
    inet6 fe80::9eeb:e8ff:fed5:db94/64 scope link
       valid_lft forever preferred_lft forever
9: tap106i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether fe:3a:ee:79:a2:90 brd ff:ff:ff:ff:ff:ff
12: tap103i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether ae:65:1a:5e:1a:e9 brd ff:ff:ff:ff:ff:ff
13: veth104i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:a4:30:e2:ea:9c brd ff:ff:ff:ff:ff:ff link-netnsid 0


Bash:
ip route show
default via 10.100.1.1 dev bond01.10 proto kernel onlink
10.100.1.0/29 dev bond01.10 proto kernel scope link src 10.100.1.2
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.187
 
Is this from the source or the target? It looks like the target to me (if I'm not mistaken). Could you post the same output from the source as well?
 
This is from the other node.

Bash:
auto lo
iface lo inet loopback

iface enp0s31f6 inet manual

auto enx9cebe8d5db3e
iface enx9cebe8d5db3e inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.188/24
        gateway 192.168.1.1
        bridge-ports enp0s31f6
        bridge-stp off
        bridge-fd 0
auto bond01
iface bond01 inet manual
    bond-slaves enx9cebe8d5db94 enx9cebe8d5db3e
    bond-miimon 100
    bond-mode balance-alb

auto bond01.10
iface bond01.10 inet static
    address 10.100.1.3/29
    gateway 10.100.1.1
    vlan-raw-device bond01


Bash:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp0s31f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether c0:25:a5:bf:e6:db brd ff:ff:ff:ff:ff:ff
3: enx9cebe8d5db3e: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond01 state UP group default qlen 1000
    link/ether 9c:eb:e8:d5:db:3e brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether c0:25:a5:bf:e6:db brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.188/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::c225:a5ff:febf:e6db/64 scope link
       valid_lft forever preferred_lft forever
5: bond01: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 16:24:f8:cb:e3:20 brd ff:ff:ff:ff:ff:ff
6: bond01.10@bond01: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 16:24:f8:cb:e3:20 brd ff:ff:ff:ff:ff:ff
    inet 10.100.1.3/29 scope global bond01.10
       valid_lft forever preferred_lft forever
7: veth102i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:c6:35:79:d4:fd brd ff:ff:ff:ff:ff:ff link-netnsid 0
8: veth105i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fe:80:07:a6:66:f0 brd ff:ff:ff:ff:ff:ff link-netnsid 1


Bash:
ip route show
default via 10.100.1.1 dev bond01.10 proto kernel onlink 
10.100.1.0/29 dev bond01.10 proto kernel scope link src 10.100.1.3 
192.168.1.0/24 dev vmbr0 proto kernel scope link src 192.168.1.188
 
Can you ping between the hosts via 10.100.1.{2,3}?
Are you using a firewall?

Also, I noticed that you have bonds on both machines but only one NIC assigned to each bond. Is that intended? It is probably not the cause of your issue, but something I noticed. You could try removing the bonds and using the NICs directly, since they currently serve no purpose anyway.
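For example (just a sketch, using the 10.100.1.x addresses from your configs), you could test basic reachability on the migration network from each node:

Bash:
# From the node holding 10.100.1.3: can we reach the peer at all?
ping -c 3 10.100.1.2

# Which interface and source address would the kernel pick for the peer?
ip route get 10.100.1.2

# Is the SSH port reachable?
nc -zv 10.100.1.2 22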
 
Can you ping between the hosts via 10.100.1.{2,3}?
Are you using a firewall?

Also, I noticed that you have bonds on both machines but only one NIC assigned to each bond. Is that intended? It is probably not the cause of your issue, but something I noticed. You could try removing the bonds and using the NICs directly, since they currently serve no purpose anyway.
No, I cannot ping between the nodes.
Yes, I am using a firewall, and I have set up a separate VLAN for just the HA traffic.

No, I did not realise this. Because I added bond-slaves enx9cebe8d5db94 enx9cebe8d5db3e I thought that was the correct setup. Are you saying I need to add the following
Bash:
auto enx9cebe8d5db3e
iface enx9cebe8d5db3e inet manual

auto enx9cebe8d5db94
iface enx9cebe8d5db94 inet manual
to both network config files?


I wanted to bond my two second NICs for the HA traffic.

I hope this makes sense.
 
Yes, I am using a firewall, and I have set up a separate VLAN for just the HA traffic.
Could you then check whether the firewall is set up correctly? It seems like the firewall is blocking the traffic. Maybe you can turn it off while testing and check if it works then?
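For example (a sketch; pve-firewall is the firewall service shipped with Proxmox VE), on both nodes:

Bash:
# Check whether the firewall is currently active
pve-firewall status

# Temporarily stop it for testing (remember to re-enable it!)
pve-firewall stop

# ... repeat the ping / migration test, then:
pve-firewall start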

No, I did not realise this. Because I added bond-slaves enx9cebe8d5db94 enx9cebe8d5db3e I thought that was the correct setup. Are you saying I need to add the following
Yes, if those NICs exist on both nodes then you will need to add the respective sections to your interface config. They should get added automatically on boot though, so please double-check that the names actually match. In the ip a output there is only one of those NICs on each host, so are you positive that BOTH NICs are detected on BOTH hosts?

Because it looks like you have only 2 NICs per host, is that correct?
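For example (a sketch), you could compare which links are actually detected on each node; a bond-slaves line should only name NICs that physically exist on that node:

Bash:
# Compact overview of all links the kernel has detected on this node
ip -br link show

# Hypothetical bond-less alternative for the node that has ...db94,
# putting the VLAN directly on the NIC instead of on a bond:
#   auto enx9cebe8d5db94.10
#   iface enx9cebe8d5db94.10 inet static
#       address 10.100.1.2/29
#       vlan-raw-device enx9cebe8d5db94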
 
Could you then check whether the firewall is set up correctly? It seems like the firewall is blocking the traffic. Maybe you can turn it off while testing and check if it works then?


Yes, if those NICs exist on both nodes then you will need to add the respective sections to your interface config. They should get added automatically on boot though, so please double-check that the names actually match. In the ip a output there is only one of those NICs on each host, so are you positive that BOTH NICs are detected on BOTH hosts?

Because it looks like you have only 2 NICs per host, is that correct?
To rule out any issues with the VLAN, I have changed the IPs so that the two NICs are in the same subnet. Would this be a good way to test?

Here is the new network config. Does this look correct?
Bash:
auto lo
iface lo inet loopback

iface eno2 inet manual

auto enx9cebe8d5db94
iface enx9cebe8d5db94 inet manual

auto enx9cebe8d5db3e
iface enx9cebe8d5db3e inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.187/24
        gateway 192.168.1.1
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094


auto bond01
iface bond01 inet manual
    bond-slaves enx9cebe8d5db94 enx9cebe8d5db3e
    bond-miimon 100
    bond-mode balance-alb

auto bond01.10
iface bond01.10 inet static
    address 192.168.1.189/24
    gateway 192.168.1.1
    vlan-raw-device bond01
 
No, you need to use two different subnets for two different interfaces - otherwise you will run into routing problems.
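For example (a sketch of what your routing table would look like with 192.168.1.187 and 192.168.1.189 on the same host): two connected routes to the same subnet make it ambiguous which interface traffic leaves from:

Bash:
ip route show
# 192.168.1.0/24 dev vmbr0     proto kernel scope link src 192.168.1.187
# 192.168.1.0/24 dev bond01.10 proto kernel scope link src 192.168.1.189

# Ask the kernel which path it would actually use for the other node:
ip route get 192.168.1.188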
 
@shanreich thank you for the reply.

Bash:
auto lo
iface lo inet loopback

iface eno2 inet manual

auto enx9cebe8d5db94
iface enx9cebe8d5db94 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.187/24
        gateway 192.168.1.1
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094


auto bond01
iface bond01 inet manual
    bond-slaves enx9cebe8d5db94 enx9cebe8d5db3e
    bond-miimon 100
    bond-mode balance-alb

auto bond01.10
iface bond01.10 inet static
    address 10.100.1.2/29
    gateway 10.100.1.1
    vlan-raw-device bond01
I think your problem is the second gateway (10.100.1.1).
Try removing this gateway and test again.
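For example (a sketch of the stanza with the gateway line removed; hosts in the same /29 are directly connected and do not need a gateway to reach each other):

Bash:
auto bond01.10
iface bond01.10 inet static
    address 10.100.1.2/29
    vlan-raw-device bond01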
 
I think your problem is the second gateway (10.100.1.1).
Somehow I missed this completely :oops:
Although the IPs should not get routed via the gateway anyway, it is certainly worth a try.
 
Somehow I missed this completely :oops:
Although the IPs should not get routed via the gateway anyway, it is certainly worth a try.
So this is my interface config on one of my nodes now.

What am I missing?

Bash:
auto lo
iface lo inet loopback

iface eno2 inet manual

iface enx9cebe8d5db94 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.187/24
        gateway 192.168.1.1
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094


auto bond01
iface bond01 inet manual
    bond-slaves enx9cebe8d5db94 enx9cebe8d5db3e
    bond-miimon 100
    bond-mode balance-alb

auto bond01.10
iface bond01.10 inet static
    address 192.168.10.2/29
    vlan-raw-device bond01

Migration works when I use the 192.168.1.0/24 subnet.
 
That is the native VLAN = untagged. With these settings your network is on bond01 and not on bond01.10.
 
If you assign a VLAN tagged to a port, the VLAN tag must be appended to the IP packets. This happens when you create a vmbr0.10: the packets on this interface are then tagged with VLAN 10.
If you tell the switch that VLAN 10 is the native VLAN, then the switch puts all untagged packets into VLAN 10. Packets tagged with VLAN 10 are then discarded, as VLAN 10 is the native VLAN.
So your bond01 interface is in VLAN 10, and if you tag other VLANs you can use them via sub-interfaces such as bond01.99. Only the bond01.10 sub-interface does not work, as that VLAN is native.
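For example (a sketch), you can verify on the wire whether frames actually leave tagged or untagged:

Bash:
# -e prints the link-level header, so 802.1Q VLAN tags become visible
tcpdump -e -nn -i bond01

# Capture only VLAN-tagged frames on the bond:
tcpdump -e -nn -i bond01 vlan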
 
If you assign a VLAN tagged to a port, the VLAN tag must be appended to the IP packets. This happens when you create a vmbr0.10: the packets on this interface are then tagged with VLAN 10.
If you tell the switch that VLAN 10 is the native VLAN, then the switch puts all untagged packets into VLAN 10. Packets tagged with VLAN 10 are then discarded, as VLAN 10 is the native VLAN.
So your bond01 interface is in VLAN 10, and if you tag other VLANs you can use them via sub-interfaces such as bond01.99. Only the bond01.10 sub-interface does not work, as that VLAN is native.
Thank you for clarifying; this helped me resolve the issue.
 
