Multiple NICs

shaffin

New Member
Oct 31, 2023
Hello Forum,

I am new to Proxmox and this is my first post, so please pardon my ignorance. I installed PVE 8 yesterday on my AMD EPYC server with 2x1TB drives in a ZRAID1 (mirror) configuration for the OS, and 2x4TB, also in ZRAID1, for the data drive. The server has a 2-port 10Gb NIC that I want to dedicate as follows:

- 1 port for the corp (192.168.55.0/24) network (gw 192.168.55.1)
- 1 port for the lab (192.168.66.0/24) network.

The corp NIC port has the default gateway 192.168.55.1, while the lab port has none.

With the above network configuration deployed, PVE 8 seems to drop the network roughly every minute: my SSH session disconnects and I have to reconnect. As soon as I remove the lab network configuration and run with only the corp NIC port, the behavior disappears. Am I missing something or doing something wrong in my configuration?

Thanks,
Shaffin.
 
Last edited:
Hey,

could you please post the content of /etc/network/interfaces and explain the network around it a bit?

Also, just so it has been said: you should never store important data on a RAID 0. If any disk in a RAID 0 fails, all data is lost.
The extra storage space is definitely not worth having to explain why the company database is gone.
 
Sorry, I meant to type ZRAID1 (mirror) and not ZRAID0 :-(

Here is my /etc/network/interfaces


Code:
auto lo
iface lo inet loopback


iface enp6s0f4d1 inet manual


iface eno1 inet manual


iface eno2 inet manual


iface enp6s0f4 inet manual


auto vmbr0
iface vmbr0 inet static
        address 192.168.55.50/24
        gateway 192.168.55.1
        bridge-ports enp6s0f4d1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#The corp network 192.168.55.0/24


auto vmbr1
iface vmbr1 inet static
        address 192.168.66.0/24
        bridge-ports enp6s0f4
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#The lab network 192.168.66.0/24

Also of note: I only experience this behavior after I add a container and attach it to vmbr1.

Thanks,
Shaffin.
 
Last edited:
One thing that immediately jumps out at me: on vmbr1 you are using the network address as your NIC's address.
It should be a host address, something like 192.168.66.5/24, not .0. I'm not sure what exactly happens when you use the network address, but it may well be the cause of your problem.
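
For comparison, a corrected vmbr1 stanza could look like this (assuming 192.168.66.5 is a free host address on the lab segment; note there is still no gateway line, since only the corp bridge should carry the default route):

Code:
auto vmbr1
iface vmbr1 inet static
        address 192.168.66.5/24
        bridge-ports enp6s0f4
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
#The lab network 192.168.66.0/24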
 
Thank you for a second pair of eyes :-) I corrected the ip address to a static ip but I still encounter the problem when attaching the container to vmbr1 :-(
 
Anyone else facing this issue, or is it just something with my hardware? This is a brand-new install with the latest PVE 8.0.4.
 
Could you provide the journal of a time the network connection was lost?
journalctl --since "2023-11-19" > journal.txt
Replace the date with any where you saw the issue. Could even be today if you've experienced it today.

In addition to the journal, can you provide the output of the following commands?
Code:
ip -details -statistics address > ip_address.txt
lspci -nnk > lspci.txt

Please attach all 3 files here.
 
Last edited:

Hello Mira,

Thank you for your help. You will find the files attached!

Thanks,
Sam.
 


Hello Mira,

I am attaching another journal log that covers me creating vmbr1. After that point you will notice that the SSH session I establish keeps closing and I have to log in again. This happens consistently!

Thanks,
Shaffin.
 


Sadly there's no mention of any network issues.
Even the automatic `apt update` seems to work just fine.
Nov 21 04:12:04 pve-node1 systemd[1]: Starting pve-daily-update.service - Daily PVE download activities...
Nov 21 04:12:06 pve-node1 pveupdate[2222352]: <root@pam> starting task UPID:pve-node1:0021E915:027C3285:655C7466:aptupdate::root@pam:
Nov 21 04:12:09 pve-node1 pveupdate[2222357]: update new package list: /var/lib/pve-manager/pkgupdates
Nov 21 04:12:11 pve-node1 pveupdate[2222352]: <root@pam> end task UPID:pve-node1:0021E915:027C3285:655C7466:aptupdate::root@pam: OK
Nov 21 04:12:11 pve-node1 systemd[1]: pve-daily-update.service: Deactivated successfully.
Nov 21 04:12:11 pve-node1 systemd[1]: Finished pve-daily-update.service - Daily PVE download activities.
Nov 21 04:12:11 pve-node1 systemd[1]: pve-daily-update.service: Consumed 5.255s CPU time.

The issues start when you add the 2nd bridge (vmbr1)?

One thing to note: it seems all the enp6s0f4* interfaces are provided by the same PCI function:
Code:
4: enp6s0f4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
5: enp6s0f4d1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:07:43:3b:36:28 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 81 maxmtu 9600 numtxqueues 8224 numrxqueues 8224 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:06:00.4
6: enp6s0f4d2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 00:07:43:3b:36:30 brd ff:ff:ff:ff:ff:ff promiscuity 0  allmulti 0 minmtu 81 maxmtu 9600 numtxqueues 8224 numrxqueues 8224 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:06:00.4
7: enp6s0f4d3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 00:07:43:3b:36:38 brd ff:ff:ff:ff:ff:ff promiscuity 1  allmulti 1 minmtu 81 maxmtu 9600
    bridge_slave state forwarding priority 32 cost 100 hairpin off guard off root_block off fastleave off learning on flood on port_id 0x8001 port_no 0x1 designated_port 32769 designated_cost 0 designated_bridge 8000.0:7:43:3b:36:38 designated_root 8000.0:7:43:3b:36:38 hold_timer    0.00 message_age_timer    0.00 forward_delay_timer    0.00 topology_change_ack 0 config_pending 0 proxy_arp off proxy_arp_wifi off mcast_router 1 mcast_fast_leave off mcast_flood on bcast_flood on mcast_to_unicast off neigh_suppress off group_fwd_mask 0 group_fwd_mask_str 0x0 vlan_tunnel off isolated off locked off numtxqueues 8224 numrxqueues 8224 gso_max_size 65536 gso_max_segs 65535 tso_max_size 65536 tso_max_segs 65535 gro_max_size 65536 parentbus pci parentdev 0000:06:00.4
All interfaces ending in dX seem to have the same parentdev: 0000:06:00.4
This device is one of the 4 functions of the NIC according to the lspci output:
Code:
06:00.4 Ethernet controller [0200]: Chelsio Communications Inc T540-CR Unified Wire Ethernet Controller [1425:5403]
    Subsystem: Chelsio Communications Inc T540-CR Unified Wire Ethernet Controller [1425:0000]
    Kernel driver in use: cxgb4
    Kernel modules: cxgb4
Is it possible to use any of the other functions instead?
Or could you use either eno1 or eno2 for vmbr1 and see how it behaves?
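
To double-check which ports share a PCI function yourself, you could compare each interface's backing device under /sys. This is a rough sketch to run on the PVE host; virtual interfaces such as lo or the bridges simply have no PCI device and are labeled as such:

```shell
# Print each network interface together with its backing PCI device,
# so ports that share one PCI function (here 0000:06:00.4) stand out.
list_nic_parents() {
  for dev in /sys/class/net/*; do
    name=$(basename "$dev")
    # virtual interfaces (lo, vmbr0, ...) have no device symlink,
    # so readlink -e prints nothing for them
    pci=$(readlink -e "$dev/device" 2>/dev/null)
    if [ -n "$pci" ]; then
      echo "$name $(basename "$pci")"
    else
      echo "$name virtual"
    fi
  done
}
list_nic_parents
```

On your box, every interface that prints 0000:06:00.4 is a port of that same Chelsio function, so eno1/eno2 (which should show a different parent) are the cleanest candidates for a second bridge.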
 
Hello Mira,

Yes, I only come across this issue when I add a 2nd bridge vmbr1.

I did what you requested and created vmbr1 against eno2, and I am up against the same problem. I have attached the journal for you.

Thanks,
Shaffin.
 


Thank you for the journal!
Still nothing of note in there.

You changed the bridge port to eno2 and kept the IP the same?
Could it be there's another host in your network using the same IP?
 
