Direct link between 2 cluster nodes

wyx087

New Member
Sep 10, 2024
Hello, complete networking newbie here. I barely managed to figure out how to restore the network connection to a Proxmox box after adding a PCIe device. I'd greatly appreciate a little pointer.


I'm having trouble figuring out how to directly link 2 nodes so that ZFS replication and VM migration can run faster and cause less traffic congestion on the usual link used by the other containers/VMs.

I'm guessing I should be looking at this:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server

But which setup should I use?

Currently I have one machine with the following /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

iface enp5s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.5.2/24
        gateway 192.168.5.1
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0

iface enp4s0 inet manual

source /etc/network/interfaces.d/*

The second machine is a lower-powered/slower machine, but it also has 2 NICs. So I'm looking to utilise both NICs in both machines somehow.

Alternatively, what is the best use for the 2 NICs on these machines? The 5-port switch is a bog-standard cheap TL-SG105. What bonding options do I have to help speed up VM migration?

(yes, for High Availability, I have set up a QDevice on a NAS)

Thanks very much
 
I think this is what I wanted:
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_migration_network

With a direct cable connecting the two, just set up IPs in the same subnet in the network interfaces, then set datacenter.cfg as per the link above.

Now HA migration and disk replication all go through this direct link.
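
For anyone finding this later, a minimal sketch of what that looks like (interface names and addresses here are examples, not my exact config):

Code:
# /etc/network/interfaces on node 1: give the directly-cabled NIC an IP
auto enp5s0
iface enp5s0 inet static
        address 10.10.10.1/24
# node 2 gets e.g. 10.10.10.2/24 on its directly-cabled NIC

Code:
# /etc/pve/datacenter.cfg: send migration traffic over that subnet
migration: secure,network=10.10.10.0/24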
Did you do something else?
Because I'm getting an ssh error with my setup.

ssh: connect to host 10.10.10.2 port 22: No route to host
ERROR: migration aborted (duration 00:00:03): Can't connect to destination address using public key
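
In case it helps, this is how I'm checking whether the direct link is reachable at all from the source node (just standard commands, nothing Proxmox-specific):

Code:
# does the source node have a route to the target's direct-link address?
ip route get 10.10.10.2

# is the target reachable on that link?
ping -c 3 10.10.10.2

# can we open an SSH connection on that address?
ssh -o ConnectTimeout=5 root@10.10.10.2 true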
 
ssh: connect to host 10.10.10.2 port 22: No route to host

You gave us zero information about your setup. The more details you post here, the greater the chance for a helpful answer. Please start by giving us some information, like the copy-n-pasted output of some commands:

PVE System information:
  • pveversion -v
  • ss -tlpn # which process is listening on which address/tcp-port
Basic network information:
  • ip address show # currently active IP addresses on one NODE
  • ip route show # currently active routing table on one NODE
  • ip link show # currently active links on one NODE
  • cat /etc/network/interfaces # configuration of the network
Please do the above for both systems.

Those are examples. You may add/edit commands and options if you can enrich the information given. Oh, and please put each command in a separate [CODE]...[/CODE]-block for better readability.
 
Oh, yeah, you're right, sorry.

These are the two systems:
Code:
$pveversion -v
proxmox-ve: 8.4.0 (running kernel: 6.8.12-10-pve)
pve-manager: 8.4.1 (running version: 8.4.1/2a5fa54a8503f96d)
proxmox-kernel-helper: 8.1.1
proxmox-kernel-6.8.12-10-pve-signed: 6.8.12-10
proxmox-kernel-6.8: 6.8.12-10
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
proxmox-kernel-6.5.13-6-pve-signed: 6.5.13-6
proxmox-kernel-6.5: 6.5.13-6
ceph-fuse: 16.2.15+ds-0+deb12u1
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.0
libpve-cluster-perl: 8.1.0
libpve-common-perl: 8.3.1
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.6
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.4.1-1
proxmox-backup-file-restore: 3.4.1-1
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.1
proxmox-mail-forward: 0.3.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.10
pve-cluster: 8.1.0
pve-container: 5.2.6
pve-docs: 8.4.0
pve-edk2-firmware: 4.2025.02-3
pve-esxi-import-tools: 0.7.4
pve-firewall: 5.1.1
pve-firmware: 3.15-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.2
pve-qemu-kvm: 9.2.0-5
pve-xtermjs: 5.5.0-2
qemu-server: 8.3.12
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve2
Code:
$ss -tlpn 
State   Recv-Q  Send-Q   Local Address:Port   Peer Address:Port  Process                                                                                                                                     
LISTEN  0       100          127.0.0.1:25          0.0.0.0:*      users:(("master",pid=3153,fd=13))                                                                                                         
LISTEN  0       4096         127.0.0.1:85          0.0.0.0:*      users:(("pvedaemon worke",pid=3312,fd=6),("pvedaemon worke",pid=3311,fd=6),("pvedaemon worke",pid=3310,fd=6),("pvedaemon",pid=3309,fd=6)) 
LISTEN  0       128            0.0.0.0:22          0.0.0.0:*      users:(("sshd",pid=2989,fd=3))                                                                                                             
LISTEN  0       4096           0.0.0.0:111         0.0.0.0:*      users:(("rpcbind",pid=2577,fd=4),("systemd",pid=1,fd=36))                                                                                 
LISTEN  0       128               [::]:22             [::]:*      users:(("sshd",pid=2989,fd=4))                                                                                                             
LISTEN  0       100              [::1]:25             [::]:*      users:(("master",pid=3153,fd=14))                                                                                                         
LISTEN  0       4096              [::]:111            [::]:*      users:(("rpcbind",pid=2577,fd=6),("systemd",pid=1,fd=38))                                                                                 
LISTEN  0       4096                 *:3128              *:*      users:(("spiceproxy work",pid=3329,fd=6),("spiceproxy",pid=3328,fd=6))                                                                     
LISTEN  0       4096                 *:8006              *:*      users:(("pveproxy worker",pid=555431,fd=6),("pveproxy worker",pid=555430,fd=6),("pveproxy worker",pid=555223,fd=6),("pveproxy worker",pid=555222,fd=6),("pveproxy worker",pid=554900,fd=6),("pveproxy worker",pid=554899,fd=6),("pveproxy",pid=3322,fd=6))
Code:
$ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether b4:2e:99:02:73:52 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b4:2e:99:02:73:52 brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.211/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::b62e:99ff:fe02:7352/64 scope link
       valid_lft forever preferred_lft forever
4: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 4e:0a:60:de:c3:d1 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.2/24 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::a4ed:b1ff:fe6b:9571/64 scope link
       valid_lft forever preferred_lft forever
13: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i0 state UNKNOWN group default qlen 1000
    link/ether 02:a0:27:e8:13:f1 brd ff:ff:ff:ff:ff:ff
14: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7e:e8:d5:df:49:6f brd ff:ff:ff:ff:ff:ff
[...]
85: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP group default qlen 1000
    link/ether 7a:72:89:56:e4:97 brd ff:ff:ff:ff:ff:ff
Code:
$ip route show 
default via 192.168.121.1 dev vmbr0 proto kernel onlink
10.10.10.0/24 dev vmbr1 proto kernel scope link src 10.10.10.2
192.168.121.0/24 dev vmbr0 proto kernel scope link src 192.168.121.211
Code:
$ip link show   
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp8s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether b4:2e:99:02:73:52 brd ff:ff:ff:ff:ff:ff
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether b4:2e:99:02:73:52 brd ff:ff:ff:ff:ff:ff
4: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4e:0a:60:de:c3:d1 brd ff:ff:ff:ff:ff:ff
13: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr101i0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 02:a0:27:e8:13:f1 brd ff:ff:ff:ff:ff:ff
14: fwbr101i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 7e:e8:d5:df:49:6f brd ff:ff:ff:ff:ff:ff
[...]
85: fwln100i0@fwpr100p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr100i0 state UP mode DEFAULT group default qlen 1000
    link/ether 7a:72:89:56:e4:97 brd ff:ff:ff:ff:ff:ff
Code:
$cat /etc/network/interfaces   
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp8s0 inet manual
#regular 1Gbit Port

auto enx6c6e070ab5c5
iface enx6c6e070ab5c5 inet manual
#5 Gbit Adapter

auto vmbr0
iface vmbr0 inet static
        address 192.168.121.211/24
        gateway 192.168.121.1
        bridge-ports enp8s0
        bridge-stp off
        bridge-fd 0
        hwaddress b4:2e:99:02:73:52

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.2/24
        bridge-ports enx6c6e070ab5c5
        bridge-stp off
        bridge-fd 0
 
(I had to split the message in two because of the character limit. I also cut some entries from ip address show etc. -> [...])
Code:
$pveversion -v
proxmox-ve: 8.4.0 (running kernel: 6.14.0-2-pve)
pve-manager: 8.4.1 (running version: 8.4.1/2a5fa54a8503f96d)
proxmox-kernel-helper: 8.1.1
proxmox-kernel-6.14.0-2-pve-signed: 6.14.0-2
proxmox-kernel-6.14: 6.14.0-2
proxmox-kernel-6.8.12-10-pve-signed: 6.8.12-10
proxmox-kernel-6.8: 6.8.12-10
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
amd64-microcode: 3.20250311.1
ceph-fuse: 17.2.8-pve2
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u1
frr-pythontools: 10.2.2-1+pve1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.1
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.0
libpve-cluster-perl: 8.1.0
libpve-common-perl: 8.3.1
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.6
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.4.1-1
proxmox-backup-file-restore: 3.4.1-1
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.1
proxmox-mail-forward: 0.3.2
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.10
pve-cluster: 8.1.0
pve-container: 5.2.6
pve-docs: 8.4.0
pve-edk2-firmware: 4.2025.02-3
pve-esxi-import-tools: 0.7.4
pve-firewall: 5.1.1
pve-firmware: 3.15-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.2
pve-qemu-kvm: 9.2.0-5
pve-xtermjs: 5.5.0-2
qemu-server: 8.3.12
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.7-pve2
Code:
$ss -tlpn 
State  Recv-Q Send-Q Local Address:Port   Peer Address:Port Process                                                                                                                                         
LISTEN 0      100        127.0.0.1:25          0.0.0.0:*     users:(("master",pid=1651,fd=13))                                                                                                               
LISTEN 0      4096       127.0.0.1:85          0.0.0.0:*     users:(("pvedaemon worke",pid=896236,fd=6),("pvedaemon worke",pid=833316,fd=6),("pvedaemon worke",pid=103483,fd=6),("pvedaemon",pid=1714,fd=6))
LISTEN 0      4096         0.0.0.0:111         0.0.0.0:*     users:(("rpcbind",pid=1267,fd=4),("systemd",pid=1,fd=36))                                                                                       
LISTEN 0      128          0.0.0.0:22          0.0.0.0:*     users:(("sshd",pid=1473,fd=3))                                                                                                                 
LISTEN 0      100            [::1]:25             [::]:*     users:(("master",pid=1651,fd=14))                                                                                                               
LISTEN 0      4096               *:3128              *:*     users:(("spiceproxy work",pid=1737,fd=6),("spiceproxy",pid=1736,fd=6))                                                                         
LISTEN 0      4096            [::]:111            [::]:*     users:(("rpcbind",pid=1267,fd=6),("systemd",pid=1,fd=38))                                                                                       
LISTEN 0      128             [::]:22             [::]:*     users:(("sshd",pid=1473,fd=4))                                                                                                                 
LISTEN 0      4096               *:8006              *:*     users:(("pveproxy worker",pid=156638,fd=6),("pveproxy worker",pid=156268,fd=6),("pveproxy worker",pid=152032,fd=6),("pveproxy",pid=1731,fd=6))
Code:
$ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
    link/ether 58:47:ca:7b:dc:bf brd ff:ff:ff:ff:ff:ff
3: enx6c6e070ab3d0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UP group default qlen 1000
    link/ether 6c:6e:07:0a:b3:d0 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 58:47:ca:7b:dc:bf brd ff:ff:ff:ff:ff:ff
    inet 192.168.121.213/24 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::5a47:caff:fe7b:dcbf/64 scope link
       valid_lft forever preferred_lft forever
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 6c:6e:07:0a:b3:d0 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.3/24 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::6e6e:7ff:fe0a:b3d0/64 scope link
       valid_lft forever preferred_lft forever
6: veth210i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr210i0 state UP group default qlen 1000
    link/ether fe:c0:06:31:ff:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: fwbr210i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f2:5b:ed:6f:f8:87 brd ff:ff:ff:ff:ff:ff
[...]
33: fwln208i0@fwpr208p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr208i0 state UP group default qlen 1000
    link/ether 3e:1f:98:bc:6b:9b brd ff:ff:ff:ff:ff:ff
Code:
$ip route show 
default via 192.168.121.1 dev vmbr0 proto kernel onlink
10.10.10.0/24 dev vmbr1 proto kernel scope link src 10.10.10.3
192.168.121.0/24 dev vmbr0 proto kernel scope link src 192.168.121.213
Code:
$ip link show   
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp9s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 58:47:ca:7b:dc:bf brd ff:ff:ff:ff:ff:ff
3: enx6c6e070ab3d0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UP mode DEFAULT group default qlen 1000
    link/ether 6c:6e:07:0a:b3:d0 brd ff:ff:ff:ff:ff:ff
4: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 58:47:ca:7b:dc:bf brd ff:ff:ff:ff:ff:ff
5: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 6c:6e:07:0a:b3:d0 brd ff:ff:ff:ff:ff:ff
6: veth210i0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr210i0 state UP mode DEFAULT group default qlen 1000
    link/ether fe:c0:06:31:ff:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
7: fwbr210i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether f2:5b:ed:6f:f8:87 brd ff:ff:ff:ff:ff:ff
[...]
33: fwln208i0@fwpr208p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr208i0 state UP mode DEFAULT group default qlen 1000
    link/ether 3e:1f:98:bc:6b:9b brd ff:ff:ff:ff:ff:ff
Code:
$cat /etc/network/interfaces   
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp9s0 inet manual
#2.5G Port

iface enx6c6e070ab3d0 inet manual
#5G Adapter

auto vmbr0
iface vmbr0 inet static
        address 192.168.121.213/24
        gateway 192.168.121.1
        bridge-ports enp9s0
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.10.10.3/24
        bridge-ports enx6c6e070ab3d0
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
 
I noticed that the network adapter (enx6c6e070ab5c5) on system 1 is deactivated for some reason I can't figure out.

Btw, these adapters are DeLOCK USB Type-A/C 3.1 to 5 Gigabit LAN (RJ-45) adapters with a Realtek RTL8157 chip.

Both systems are plugged into an unmanaged switch and also have a direct connection to each other via these adapters.
 
Well, I do not see an obvious fatal error anywhere.

System 1 - Interfaces:
auto enx6c6e070ab5c5
iface enx6c6e070ab5c5 inet manual
I would remove that "auto" line. (And "ifreload -a" afterwards.)
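
Roughly like this, as a sketch of the intended change:

Code:
# /etc/network/interfaces on system 1: keep only the iface line
iface enx6c6e070ab5c5 inet manual
#5 Gbit Adapter

Code:
# apply the changed configuration without a reboot
ifreload -a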

System 2 - "ip link show" lacks output for enx6c6e070ab3d0, but it is there, right?
 
First off, thank you for your time!

System 1 - Interfaces:

I would remove that "auto" line. (And "ifreload -a" afterwards.)
done

System 2 - "ip link show" lacks output for enx6c6e070ab3d0, but it is there, right?
I guess you mean system 1, because that one shows no entry for the enx interface in the "ip link show" output.
And no, there is no output, even though the device is plugged in.

edit:
If I do an "ip link show | grep enx", only system 2 shows any output:
Code:
system 1$ ip link show | grep enx

system 2$ ip link show | grep enx
3: enx6c6e070ab3d0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UP mode DEFAULT group default qlen 1000
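
For completeness, here is what I plan to check next on system 1 to see whether the kernel detects the USB adapter at all (generic commands; lsusb needs the usbutils package):

Code:
# does the adapter enumerate on the USB bus?
lsusb

# any kernel messages about USB / the Realtek NIC?
dmesg | grep -iE 'usb|realtek|r815' | tail -n 40

# list all interfaces the kernel knows about, including ones that are down
ip -br link show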
 