[SOLVED] Migrate and SSH broken after upgrade 7 to 8

cglmicro

Member
Oct 12, 2020
Hi guys.

I have a cluster with nodes proxmox18s.mydomain.net to proxmox24s.mydomain.net. They all have a public IP, and a second LAN (192.168.150.18 to 192.168.150.24) for the cluster communications.

I migrated every VM out of proxmox24s (192.168.150.24) to the other PVE nodes, and I upgraded this node from 7 to 8, since pve7to8 showed no errors or warnings.

I tried to migrate some VMs back to proxmox24s, but I receive this error:
Code:
ssh: connect to host 15.xxx.xxx.65 port 22: Connection timed out

TASK ERROR: command '/usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=proxmox24s' root@15.xxx.xxx.65 pvecm mtunnel -migration_network 192.168.150.10/24 -get_migration_ip' failed: exit code 255

When I try to SSH from the outside to proxmox18s.mydomain.net, I also get a timeout.

When I try to SSH from another member of my cluster with "ssh 192.168.150.24", it works.

During the upgrade, when I was asked what to do with sshd_config, I answered NO, and the timestamp of my /etc/ssh/sshd_config is still from April 26th, when I did the initial config of this server.

The server has also been rebooted.

What should I do from here?

Thank you for your help.
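(As an aside, unrelated to the eventual fix: migration traffic can be pinned to the cluster LAN in /etc/pve/datacenter.cfg, so migrations never fall back to the public address. A sketch, assuming the 192.168.150.0/24 LAN from this thread:

```
migration: secure,network=192.168.150.0/24
```

Adjust the CIDR to your own migration network.)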
 
Hi Fiona, thank you for your answer :)

I think the interface names enp3s0f0 (WAN) and enp3s0f1 (LAN) are the same after the update, since they both show as UP:
Code:
root@proxmox24s:/etc/network# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

2: enp3s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000
link/ether d8:5e:d3:61:cf:40 brd ff:ff:ff:ff:ff:ff

3: enx36961789cfc2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 36:96:17:89:cf:c2 brd ff:ff:ff:ff:ff:ff

4: enp3s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP mode DEFAULT group default qlen 1000
link/ether d8:5e:d3:61:cf:41 brd ff:ff:ff:ff:ff:ff

5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether d8:5e:d3:61:cf:40 brd ff:ff:ff:ff:ff:ff

6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether d8:5e:d3:61:cf:41 brd ff:ff:ff:ff:ff:ff

or, as shown in the attached screenshot.

And here is /proc/net/dev:
Code:
root@proxmox24s:~# cat /proc/net/dev
Inter-|   Receive                                                |  Transmit
 face |bytes    packets errs drop fifo frame compressed multicast|bytes    packets errs drop fifo colls carrier compressed
    lo:   33018     374    0    0    0     0          0         0    33018     374    0    0    0     0       0          0
enp3s0f0:  150018    1010    0    0    0     0          0       365   319912     455    0    0    0     0       0          0
enp3s0f1: 6874895   24477    0    0    0     0          0       109  6372248   23916    0    0    0     0       0          0
 vmbr0:  134750    1009    0    0    0     0          0       368   309088     291    0    0    0     0       0          0
 vmbr1: 6532175   24477    0    0    0     0          0        87  6372248   23916    0    0    0     0       0          0

When I try to ping 8.8.8.8 from this node:
Code:
root@proxmox24s:/etc/network# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 192.168.150.24 icmp_seq=1 Destination Host Unreachable

Here is a capture of my GUI:
[screenshot attached]


And here is the content of my /etc/network/interfaces file:
Code:
root@proxmox24s:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enx36961789cfc2 inet manual

iface enp3s0f0 inet manual

#auto enp3s0f1
iface enp3s0f1 inet manual

auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports enp3s0f0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 192.168.150.24/24
        gateway 15.235.10.254
        bridge-ports enp3s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#vRACK-LAN

Any suggestions?

Thank you.
 
Code:
auto vmbr1
iface vmbr1 inet static
        address 192.168.150.24/24
        gateway 15.235.10.254
        bridge-ports enp3s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#vRACK-LAN
The gateway is not in the same subnet as specified by the address/CIDR. Do you have special routing rules in place? Otherwise I'm not sure this will work.
 
Hi Fiona, thanks again for your reply.

Yes, it's normal; all my other PVE nodes work with a similar configuration. This same server was working for months with this config before the 7-to-8 upgrade.

My servers are dedicated servers from OVH. If a server has a public IP of 15.235.10.65, the gateway needs to be 15.235.10.254; for 15.235.12.x it's 15.235.12.254, etc.
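That convention can be written down as a quick sanity check. A minimal sketch (`ovh_gateway` is a hypothetical helper; the ".254 of the public /24" rule is as described above):

```python
import ipaddress

def ovh_gateway(public_ip: str) -> str:
    # OVH convention as described above: the gateway is the .254
    # address of the public IP's /24 network.
    net = ipaddress.ip_network(f"{public_ip}/24", strict=False)
    return str(net.network_address + 254)

print(ovh_gateway("15.235.10.65"))  # 15.235.10.254
```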

Should I wipe it and start over from scratch, or do you have another suggestion before I do?

Thank you.
 
... and if it helps, is this something useful?
Code:
root@proxmox24s:/var/log/ifupdown2/network_config_ifupdown2_43_Jul-21-2023_09:32:48.101228# cat ifupdown2.debug.log
2023-07-21 09:32:48,101: MainThread: ifupdown2: log.py:196:__init_debug_logging(): debug: persistent debugging is initialized
2023-07-21 09:32:48,114: MainThread: ifupdown2: main.py:229:run_query(): debug: args = Namespace(all=False, iflist=[], verbose=False, debug=False, quiet=False, CLASS=['hotplug'], withdepends=False, perfmode=False, nocache=False, excludepats=None, interfacesfile=None, interfacesfileformat='native', type=None, list=True, running=False, checkcurr=False, raw=False, printsavedstate=False, format='native', printdependency=None, syntaxhelp=False, withdefaults=False, version=None, nldebug=False)
2023-07-21 09:32:48,114: MainThread: ifupdown2: main.py:253:run_query(): debug: creating ifupdown object ..
2023-07-21 09:32:48,114: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:2423:get_all_links_wait_netlinkq(): info: requesting link dump
2023-07-21 09:32:48,117: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:2435:get_all_addresses_wait_netlinkq(): info: requesting address dump
2023-07-21 09:32:48,117: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:2445:get_all_netconf_wait_netlinkq(): info: requesting netconf dump
2023-07-21 09:32:48,118: MainThread: ifupdown2.NetlinkListenerWithCache: nlcache.py:2246:reset_errorq(): debug: nlcache: reset errorq
2023-07-21 09:32:48,118: MainThread: ifupdown: ifupdownmain.py:329:__init__(): debug: {'enable_persistent_debug_logging': 'yes', 'use_daemon': 'no', 'template_enable': '1', 'template_engine': 'mako', 'template_lookuppath': '/etc/network/ifupdown2/templates', 'default_interfaces_configfile': '/etc/network/interfaces', 'disable_cli_interfacesfile': '0', 'addon_syntax_check': '0', 'addon_scripts_support': '1', 'addon_python_modules_support': '1', 'multiple_vlan_aware_bridge_support': '1', 'ifquery_check_success_str': 'pass', 'ifquery_check_error_str': 'fail', 'ifquery_check_unknown_str': '', 'ifquery_ifacename_expand_range': '0', 'link_master_slave': '0', 'delay_admin_state_change': '0', 'ifreload_down_changed': '0', 'addr_config_squash': '0', 'ifaceobj_squash': '0', 'adjust_logical_dev_mtu': '1', 'state_dir': '/run/network/'}
2023-07-21 09:32:48,118: MainThread: ifupdown: ifupdownmain.py:1424:load_addon_modules(): info: loading builtin modules from ['/usr/share/ifupdown2/addons']
2023-07-21 09:32:48,120: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module openvswitch not loaded (module init failed: no /usr/bin/ovs-vsctl found)
2023-07-21 09:32:48,120: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module openvswitch_port not loaded (module init failed: no /usr/bin/ovs-vsctl found)
2023-07-21 09:32:48,123: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module ppp not loaded (module init failed: no /usr/bin/pon found)
2023-07-21 09:32:48,124: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module batman_adv not loaded (module init failed: no /usr/sbin/batctl found)
2023-07-21 09:32:48,127: MainThread: ifupdown.bridge: bridge.py:644:__init__(): debug: bridge: using reserved vlan range (0, 0)
2023-07-21 09:32:48,127: MainThread: ifupdown.bridge: bridge.py:664:__init__(): debug: bridge: init: warn_on_untagged_bridge_absence=False
2023-07-21 09:32:48,127: MainThread: ifupdown.bridge: bridge.py:671:__init__(): debug: bridge: init: vxlan_bridge_default_igmp_snooping=None
2023-07-21 09:32:48,127: MainThread: ifupdown.bridge: bridge.py:680:__init__(): debug: bridge: init: arp_nd_suppress_only_on_vxlan=False
2023-07-21 09:32:48,127: MainThread: ifupdown.bridge: bridge.py:686:__init__(): debug: bridge: init: bridge_always_up_dummy_brport=None
2023-07-21 09:32:48,127: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /sbin/sysctl net.bridge.bridge-allow-multiple-vlans
2023-07-21 09:32:48,127: MainThread: ifupdown.bridge: bridge.py:696:__init__(): debug: bridge: init: multiple vlans allowed True
2023-07-21 09:32:48,129: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module mstpctl not loaded (module init failed: no /sbin/mstpctl found)
2023-07-21 09:32:48,131: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /bin/ip rule show
2023-07-21 09:32:48,131: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /bin/ip -6 rule show
2023-07-21 09:32:48,194: MainThread: ifupdown.address: address.py:298:__policy_get_default_mtu(): info: address: using default mtu 1500
2023-07-21 09:32:48,194: MainThread: ifupdown.address: address.py:312:__policy_get_max_mtu(): info: address: max_mtu undefined
2023-07-21 09:32:48,194: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /sbin/sysctl net.ipv6.conf.all.accept_ra
2023-07-21 09:32:48,195: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /sbin/sysctl net.ipv6.conf.all.autoconf
2023-07-21 09:32:48,196: MainThread: ifupdown: utils.py:301:_log_command_exec(): info: executing /usr/sbin/ip vrf id
2023-07-21 09:32:48,197: MainThread: ifupdown.dhcp: dhcp.py:55:__init__(): info: mgmt vrf_context = False
2023-07-21 09:32:48,197: MainThread: ifupdown.dhcp: dhcp.py:70:__init__(): debug: dhclient: dhclient_retry_on_failure set to 0
2023-07-21 09:32:48,197: MainThread: ifupdown.addressvirtual: addressvirtual.py:99:__init__(): info: executing /bin/ip addr help
2023-07-21 09:32:48,198: MainThread: ifupdown.addressvirtual: addressvirtual.py:104:__init__(): info: address metric support: OK
2023-07-21 09:32:48,199: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module ppp not loaded (module init failed: no /usr/bin/pon found)
2023-07-21 09:32:48,199: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module mstpctl not loaded (module init failed: no /sbin/mstpctl found)
2023-07-21 09:32:48,199: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module batman_adv not loaded (module init failed: no /usr/sbin/batctl found)
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module openvswitch_port not loaded (module init failed: no /usr/bin/ovs-vsctl found)
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:1449:load_addon_modules(): info: module openvswitch not loaded (module init failed: no /usr/bin/ovs-vsctl found)
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:1536:load_scripts(): info: looking for user scripts under /etc/network
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-pre-up.d ...
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-up.d ...
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-post-up.d ...
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-pre-down.d ...
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-down.d ...
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:1539:load_scripts(): info: loading scripts under /etc/network/if-post-down.d ...
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:396:__init__(): info: using mgmt iface default prefix eth
2023-07-21 09:32:48,200: MainThread: ifupdown: ifupdownmain.py:2004:query(): debug: setting flag ALL
2023-07-21 09:32:48,201: MainThread: ifupdown.networkInterfaces: networkinterfaces.py:506:read_file(): info: processing interfaces file /etc/network/interfaces
2023-07-21 09:32:48,201: MainThread: ifupdown2: main.py:85:main(): error: main exception: no ifaces found matching given allow lists
2023-07-21 09:32:48,218: MainThread: ifupdown2: log.py:373:write(): info: exit status 1

I see "error: main exception: no ifaces found matching given allow lists" at the end.

Thanks.
 
... and also, if it helps, here are the routes on the DEFECTIVE server:
Code:
root@proxmox24s:~# ip route show
default via 15.235.10.254 dev vmbr1 proto kernel onlink
15.235.10.0/24 dev vmbr0 proto kernel scope link src 15.235.10.65
192.168.150.0/24 dev vmbr1 proto kernel scope link src 192.168.150.24

compared to the routes of another, identical WORKING server:
Code:
root@proxmox23s:~# ip route show
default via 15.235.114.254 dev vmbr0
15.235.114.0/24 dev vmbr0 proto kernel scope link src 15.235.114.15
192.168.150.0/24 dev vmbr1 proto kernel scope link src 192.168.150.23

If I do:
Code:
root@proxmox24s:~# ip route del default via 15.235.10.254 dev vmbr1
root@proxmox24s:~# ip route add default via 15.235.10.254 dev vmbr0
root@proxmox24s:~# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=109 time=1.51 ms
(yeah !!)

How can I change the route in my interfaces file so it survives a reboot?

Thank you.
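(Editor's note: if a manually added route ever does need to survive reboots, ifupdown2 honours post-up hooks in /etc/network/interfaces. A sketch only, reusing the gateway from the output above on the vmbr0 stanza:

```
auto vmbr0
iface vmbr0 inet dhcp
        bridge-ports enp3s0f0
        bridge-stp off
        bridge-fd 0
        post-up ip route replace default via 15.235.10.254 dev vmbr0
```

With DHCP on vmbr0 this is normally unnecessary, since the provider's DHCP lease already installs the default route.)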
 
... and also if it help, here is my route for the DEFECTIVE server:
There is clearly a difference between the two. On the good one, vmbr0 is on the 15.x subnet and the default gateway is 15.x. On the bad one, vmbr1 is on the 192.168.x subnet with a 15.x gateway, which is not going to work, as @fiona pointed out yesterday.
Remove the gateway line from vmbr1 in your network config. The gateway is likely set up by DHCP from your provider.
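For reference, a sketch of that stanza with only the gateway line dropped (all other options kept exactly as in the posted config):

```
auto vmbr1
iface vmbr1 inet static
        address 192.168.150.24/24
        bridge-ports enp3s0f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#vRACK-LAN
```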


 
Hi bbgeek17.

I just tried it and it worked!
I never tried removing it, since all my other servers required it in the past. I wonder why my other servers need that gateway in the vmbr0 block, but as long as it works, I'll be able to upgrade and fix my other servers.

Thank you to both of you :)
 
