10GbE card/ports need configuration?

ieronymous

Hi

Noticed today that, by default after installing Proxmox, if you go to Node -> Network you can see every port's Active status (nothing unusual here). But while the server's network card has 4 ports (2 of which are 1GbE and the other 2 10GbE), the 1GbE ports have Active status Yes while the 10GbE ports show No. I was trying to figure out why this happens. I checked the server's BIOS, and for that specific port of the onboard NIC (Intel 10G 4P X520/I350 rNDC) the options were:

Legacy boot protocol (set as none, was PXE)
NONE
PXE
iSCSI Primary
iSCSI Secondary
FCoE

TCP/IP Parameters via DHCP -> set to enabled (was disabled)
iSCSI Parameters via DHCP -> set to enabled (was disabled)

Virtualization Mode -> set to SR-IOV (was None)
SR-IOV
None

Even though I know what these parameters do, and that they have nothing to do with why Proxmox sees the port as not active, I was trying things from the hardware perspective first.
At the same time, though, I looked at /etc/network/interfaces and noticed that each port had:
Code:
auto <name_of_the_port>
iface <name_of_the_port> inet manual
That was not true for the 10GbE ports, since they had only the iface <name_of_the_port> inet manual line.
When I added the auto <name_of_the_port> line, reloaded the network configuration with ifreload -a, and refreshed the network GUI section again, the 10GbE ports were set to the Active state. At the same time, though, I had created a vmbr bridge using that port, so is there a rule in Proxmox that sets a port as active only if it is being used by a vmbr or bond?
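For reference, this is roughly what the relevant part of my /etc/network/interfaces looks like after the change (eno1/vmbr10 are just the names on my box, not defaults):

Code:
# 10GbE port: the added 'auto' line brings it up on boot
auto eno1
iface eno1 inet manual

# bridge using that 10GbE port
auto vmbr10
iface vmbr10 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0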

Does Proxmox check something on the hardware side of the machine in order to determine whether it is going to set a port as active or not?
Does Proxmox need a specific configuration for 10GbE ports?
For now I have created a vmbr based on that 10GbE port, in order to see if a VM created on that vmbr -> 10GbE port will get net access.

New edit: it does have net access. Still don't have a way to check the 10g connection, though.
- Do I have to change the Jumbo Packet option from the current 1514 to 9000 in the properties of the VirtIO Ethernet adapter inside Windows?
- Do I have to set the MTU option to 9000 on both the eno1 (10g) port and the vmbr10 bridge as well?
- If I create a second vmbr bridge based on the second 10g port, create a VM based on that bridge, and afterwards create a shared folder between the two VMs and watch the transfer speed, would Proxmox send the data from VM1 out of the first 10g port of the server to the switch, and back from the switch into the second 10g port, ending up in the second VM? Shouldn't I then be able to watch the transfer speeds?
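(As for checking whether the link itself negotiated at 10g, as opposed to measuring throughput, I suppose something like ethtool would show it; eno1 is just the port name on my box:)

Code:
# show the negotiated link speed/duplex of the port
ethtool eno1 | grep -i speed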


PS: Still don't know if I can use the 10g port, despite its Active state, because I have connected the server to one of the switch's 4 10g uplink ports (all the other ports are 1g).
I know that uplink ports are normally used for switch-to-switch/router etc. connections, but inside the switch's menu I couldn't find an option to declare that specific port as an access port or something similar.
 
None??? Really?
please do not bump your thread without either more information or a concrete question.

AFAIU your post, you are unsure if the 10g ports work?

there should not be any difference in the handling of 10g ports/nics in contrast to 1g ports/nics, so the usual linux (debian) network configuration applies

in the installer we only add the 'auto' to the vmbr we create; no other configuration is done automatically

to use a nic/port you have to give it an ip or add it as a subinterface to a bridge

if you want to test the speed i would recommend something like iperf or iperf3
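
for example (10.10.100.6 is just a placeholder for the other machine's ip):

Code:
# on the receiving machine: start an iperf3 server (listens on tcp port 5201 by default)
iperf3 -s

# on the sending machine: run a throughput test towards it (10 seconds by default)
iperf3 -c 10.10.100.6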

if there are any specific questions left, please ask
 
please do not bump your thread without either more information or a concrete question.
... I have no extra info, since everything needed is already given. Sorry for the bump; still, someone answered. Wrong approach, but it sometimes works as a reply initiator.

Apart from that...
you are unsure if the 10g ports work?
They work, since there is LED activity; I also created a vmbr based on that port, and the VM based on that vmbr has net.
there should not be any difference in the handling of 10g ports/nics
...for starters, the MTU being set to a value of 9000 instead of 1500 is a difference

in the installer we only add the 'auto' to the vmbr we create; no other configuration is done automatically
Didn't know that.

to use a nic/port you have to give it an ip or add it as a subinterface to a bridge
The first part of your answer doesn't apply in general, since my vmbr1 (no IP in its options) is based on bond1 (no IP in its options either, just the participating ports filled in, in LACP mode), and that bond1 is based on 2 ports (each with nothing set up in its options). The VM which is based on that vmbr1 has net. The VM takes its IP from a DHCP server, so did you mean <<to use a port for a VM which can't get its network configuration from anywhere else?>> I didn't specify an IP address anywhere.
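For context, that working setup looks roughly like this in /etc/network/interfaces (port names are from my box, and I am sketching from memory):

Code:
auto bond1
iface bond1 inet manual
    bond-slaves eno3 eno4
    bond-mode 802.3ad
    bond-miimon 100

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0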

if you want to test the speed i would recommend something like iperf or iperf3

if there are any specific questions left, please ask
Yes, my initial one: how to set up Proxmox so that it sends the backups to a remote server when those servers are connected via a 10GbE connection. No one has a guide on that, here or on YouTube or on blogs; they just talk about it as a general idea, which is not practically helpful.

By no means is my tone here meant to be offensive; it is just frustration on my part that there are no guides for simple things like this. If you are aware of such a guide, please point me to it, or give your insight in as much detail as possible.

Thank you in advance
 
...for starters, the MTU being set to a value of 9000 instead of 1500 is a difference
mtu has nothing to do with whether the card is 10g or 1g though...
(i can happily set the mtu to 9000 on my 1g card)

The first part of your answer doesn't apply in general, since my vmbr1 (no IP in its options) is based on bond1 ... I didn't specify an IP address anywhere.
yes, then it's a subinterface of a bridge (through a bond), if i understand correctly

Yes, my initial one: how to set up Proxmox so that it sends the backups to a remote server when those servers are connected via a 10GbE connection. ...
basically you need to set up your network so that the route that is chosen uses the 10g nic

this can be done e.g. by having an ip in the same subnet as the target server configured on the 10g nic (or the bridge)

how this has to be set up depends largely on your network layout and no general answer can be given here
but maybe the network documentation can help you there: https://pve.proxmox.com/wiki/Network_Configuration
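
as a rough sketch (interface and address names are just examples, adapt to your setup), the bridge stanza in /etc/network/interfaces on the source server could look like this:

Code:
auto vmbr10
iface vmbr10 inet static
    address 10.10.100.5/24
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

if the target server has e.g. 10.10.100.6/24 configured on its 10g side, traffic between the two addresses will then use the 10g link automatically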
 
(i can happily set the mtu to 9000 on my 1g card)
you can, but it won't have any effect

yes, then it's a subinterface of a bridge (through a bond), if i understand correctly
I know that it works; I don't know why, though.

but maybe the network documentation can help you there: https://pve.proxmox.com/wiki/Network_Configuration
Already seen it.

this can be done e.g. by having an ip in the same subnet as the target server configured on the 10g nic (or the bridge)
This is what I am thinking of:
1. On the server with the VMs (let's call it prox1) I'll set up a vmbr based on that 10g port (it is a dual-port NIC, but that is irrelevant) and set up only an IP address with no gateway, like 10.10.100.5/24.
2. On the second server (e.g. prox2), where the virtualized TrueNAS exists, I'll configure the 10g port again on a vmbr and set an IP address of 10.10.100.6/24 (no gateway again).
Since there is a direct connection between them, I don't need to configure routes on the switch or the router.
Ok, all that was the easy part. How do I now dictate that prox1 use the 10g path (10.10.100.0) instead of its management path (192.168.10.0) to move backups directly to the remote location (prox2)?
The way I am describing it, it seems I am going to answer myself that there is no need to configure anything else, since during the procedure of adding that remote storage to the node I will already have dictated the 10g path by entering the prox2 IP where the remote share/iSCSI storage exists. Am I right?
 
you can, but it won't have any effect
sorry but this is simply not true, "jumbo frames" were invented in 1998 for gigabit cards... see e.g. https://en.wikipedia.org/wiki/Jumbo_frame#Inception

1. On the server with the VMs (let's call it prox1) I'll set up a vmbr based on that 10g port ... Am I right?
yes, the linux kernel will use the most specific route for the target address according to its routing table, and for each configured ip address there will be an automatic entry for that subnet on the device
you can view the routes with 'ip route show' and you can see the route for a specific target address with 'ip route get <ip-address>'
for example:

Code:
ip route get 8.8.8.8
Code:
8.8.8.8 via 192.168.X.Y dev vmbr0 src 192.168.X.Z uid 1000
    cache
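
and with an ip like 10.10.100.5/24 configured on the 10g bridge (using the example addresses from your post), 'ip route show' would then contain an automatic ('proto kernel') entry roughly like this:

Code:
10.10.100.0/24 dev vmbr10 proto kernel scope link src 10.10.100.5

(device and address names here are just the examples from this thread)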
 
sorry but this is simply not true, "jumbo frames" were invented in 1998 for gigabit cards... see e.g. https://en.wikipedia.org/wiki/Jumbo_frame#Inception
Ok, you're right... probably I was thinking of something else. Nice info, though, about the CRC errors and the way to counter that issue. It seems then that in 2021 it is pointless to leave it at the default 1500 value, since most if not all of today's network equipment supports that feature.

That MTU option is available in the port options, the bond options, and the vmbr options as well. Do you have to set it at each step, or only on the final one, which will be the vmbr (I am talking about the case where you have a vmbr based on a bond, and the bond based on a port)?
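To make the question concrete, this is what I would write in /etc/network/interfaces if it has to be set on every layer (just a sketch with example names from my setup):

Code:
auto eno1
iface eno1 inet manual
    mtu 9000

auto bond1
iface bond1 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    mtu 9000

auto vmbr1
iface vmbr1 inet manual
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0
    mtu 9000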
 
