Direct connection between nodes missing network device

pamoman

New Member
Nov 27, 2022
My goal is to connect 2 nodes directly using an SFP+ 10GbE DAC cable for ZFS replication. Both nodes have the same network card, which has 4 ports in total: 2 x 10GbE SFP+ ports (eno1 & eno2) and 2 x 1GbE ports (eno3 & eno4). Ports eno1 and eno3 are connected to the switch, and port eno2 is directly connected to the other node. The problem is that port eno2 doesn't even show up as a network device in Proxmox on either node.

How do I go about this? I tried manually adding eno2 to the interfaces file and creating a bridge on each node, with the IP addresses 172.23.23.1/30 and 172.23.23.2/30, but pinging doesn't work. Not sure why this is the case. Below is my /etc/network/interfaces file before any of these attempts:


Bash:
auto lo
iface lo inet loopback

iface eno3 inet manual

iface eno4 inet manual

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 172.22.22.10/28
        gateway 172.22.22.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0

iface vmbr1 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#Datacenter
 
Hi,

What you see in the GUI is just the parsed /etc/network/interfaces. Running ip link on the command line should show you all available network ports. You can post the result here.
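
For reference, either of these should do (the second just prints a brief one-line-per-interface summary):

Bash:
# full link-layer details for every interface the kernel knows about
ip link
# brief one-line-per-interface overview
ip -br link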

How did you try to add eno2 to your interfaces config?
 
This is what I get:

Bash:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 24:6e:96:05:90:74 brd ff:ff:ff:ff:ff:ff
    altname enp9s0f0
3: eno4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 24:6e:96:05:90:75 brd ff:ff:ff:ff:ff:ff
    altname enp9s0f1
4: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP mode DEFAULT group default qlen 1000
    link/ether 24:6e:96:05:90:70 brd ff:ff:ff:ff:ff:ff
    altname enp1s0f0
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 24:6e:96:05:90:74 brd ff:ff:ff:ff:ff:ff
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 24:6e:96:05:90:70 brd ff:ff:ff:ff:ff:ff

What file do I need to edit to set up the IP address and subnet manually? I tried adding the config for eno2 directly inside /etc/network/interfaces but got an error when applying the changes:

Bash:
auto lo
iface lo inet loopback

iface eno3 inet manual

iface eno4 inet manual

iface eno1 inet manual

iface eno2 inet manual

auto vmbr0
iface vmbr0 inet static
        address 172.22.22.10/28
        gateway 172.22.22.1
        bridge-ports eno3
        bridge-stp off
        bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#Datacenter

auto vmbr2
iface vmbr2 inet static
        address 172.23.23.1/30
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
#Replication
 
Just checked the BIOS on the servers (Dell PowerEdge R730XD) and noticed that when the SFP+ cable is connected, the port disappears and is not visible, so only ports 1, 3 and 4 are available. Disconnecting the cable makes port 2 appear again, so something's not right here. I'm using the exact same SFP+ cable as on the other port, which is connected directly to the switch, and everything works there. Is it possible to directly connect SFP+ ports between 2 servers?
 
Apparently Linux only sees 3 ports. It should see them without cables plugged in as well, so there might be something wrong with your card. Adding it to the /etc/network/interfaces config won't really help if "ip link" doesn't show you the port.
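
You could also check whether the second 10GbE port is visible on the PCI bus at all, e.g.:

Bash:
# list every Ethernet controller the kernel can see on the PCI bus
lspci | grep -i ethernet

As an aside: once the port does show up, a bridge isn't strictly needed for a point-to-point replication link. Assuming eno2 then appears in "ip link", assigning the address directly to the port should work just as well:

Bash:
auto eno2
iface eno2 inet static
        address 172.23.23.1/30
#Replication (172.23.23.2/30 on the other node)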
 
Just checked the BIOS on the servers (Dell PowerEdge R730XD) and noticed that when the SFP+ cable is connected, the port disappears
On a hunch - the last time I ran into such an issue (disappearing SFP+ ports) was with Intel NICs and incompatible SFP+ modules/cables.
Take a close look at `dmesg`; it should say something about this.
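
For example, something like this should surface any driver complaints about the module (grepping for ixgbe is just a guess at the driver name; `lspci -k` shows the one actually in use):

Bash:
# search the kernel log for SFP-, port- or driver-related messages
dmesg | grep -i -e sfp -e eno2 -e ixgbe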
 
Yeah, I get the feeling that it's the SFP+ cable that's the problem - presumably it's not on Intel's whitelist. I will try again with another cable tonight to see if it makes a difference.
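
If the cable isn't it and it really is the whitelist, I might also try telling the driver to accept unlisted modules - assuming the ports use the ixgbe driver, which I'd verify with `lspci -k` first. A sketch of that workaround:

Bash:
# ASSUMPTION: the 10GbE ports are driven by ixgbe (82599/X520 family)
echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf
# rebuild the initramfs so the option takes effect at boot
update-initramfs -u

A known-compatible DAC cable would still be the cleaner fix, though.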
 
