Hi
I noticed today that, by default after installing Proxmox, if you go to Node -> Network you can see each port's Active status (nothing unusual here). However, the server's network card has 4 ports (2 of them 1GbE and the other 2 10GbE), and the 1GbE ports show Active = Yes while the 10GbE ports show No. I was trying to figure out why this happens. I checked the server's BIOS, and for that specific port of the onboard NIC (Intel 10G 4P X520/I350 rNDC) the options were:
-Legacy Boot Protocol -> set to None (was PXE); available options: None, PXE, iSCSI Primary, iSCSI Secondary, FCoE
-TCP/IP Parameters via DHCP -> set to Enabled (was Disabled)
-iSCSI Parameters via DHCP -> set to Enabled (was Disabled)
-Virtualization Mode -> set to SR-IOV (was None); available options: SR-IOV, None
Even though I know what these parameters do and that they have nothing to do with why Proxmox sees the port as not active, I wanted to rule out the hardware side first.
At the same time I looked at /etc/network/interfaces and noticed that each 1GbE port had both of these lines:
auto <name_of_the_port>
iface <name_of_the_port> inet manual
That was not true for the 10GbE ports, which only had the iface <name_of_the_port> inet manual line.
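To illustrate, the file looked roughly like this (a sketch only; I'm using eno3 to stand in for a 1GbE port and eno1 for one of the 10GbE ports, matching the names I use below):

```text
# /etc/network/interfaces (sketch)

# 1GbE port: both lines present, GUI shows Active = Yes
auto eno3
iface eno3 inet manual

# 10GbE port: the "auto" line was missing, GUI shows Active = No
iface eno1 inet manual
```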
When I added the auto <name_of_the_port> line, reloaded the network configuration with ifreload -a, and refreshed the GUI's Network section, the 10GbE ports showed as Active. At the same time, though, I had already created a vmbr bridge using that port, so is there a rule in Proxmox that sets a port Active only if it is being used by a vmbr or a bond?
Does Proxmox check something on the hardware side of the machine to determine whether it will mark a port as Active?
Does Proxmox need any specific configuration for 10GbE ports?
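For what it's worth, I've also been checking the link state from the CLI; a quick sketch (eno1 is one of my 10GbE ports, and the hardware-specific lines are commented out since they only make sense on the actual host):

```shell
# Show link state (UP/DOWN) for every interface at a glance
ip -br link show

# Hardware-specific checks, to run on the host itself (eno1 = 10GbE port):
#   ip link set eno1 up                    # bring the port up if it shows DOWN
#   ethtool eno1 | grep -E 'Speed|Link'    # negotiated speed and "Link detected"
```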
For now I have created a vmbr based on that 10GbE port to see whether a VM attached to that vmbr (and through it to the 10GbE port) gets network access.
Edit: it does have network access. I still don't have a way to verify the 10G connection itself, though.
-Do I have to change the Jumbo Packet option from the current 1514 to 9000 in the properties of the VirtIO Ethernet adapter inside Windows?
-Do I have to set the MTU option to 9000 on both the eno1 (10G) port and the vmbr10 bridge as well?
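If I go that route, my understanding is the change would look something like this in /etc/network/interfaces (a sketch only; the address is a placeholder, and it assumes the switch supports jumbo frames end to end):

```text
auto eno1
iface eno1 inet manual
    mtu 9000

auto vmbr10
iface vmbr10 inet static
    address 192.168.1.10/24
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    mtu 9000
```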
-If I create a second vmbr bridge based on the second 10G port and create a VM on that bridge, then set up a shared folder between the two VMs and watch the transfer speed, would Proxmox send the data from VM1 out through the first 10G port to the switch, and back from the switch in through the second 10G port to end up in the second VM? Shouldn't I then be able to watch the transfer speeds?
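As an alternative to timing a shared-folder copy, I was thinking of measuring the throughput directly with iperf3 between the two VMs (a sketch; it assumes iperf3 is installed in both VMs, and the IP is a placeholder for VM2's address):

```text
# On VM2: run the iperf3 server
iperf3 -s

# On VM1: run a 10-second throughput test toward VM2
iperf3 -c 192.168.1.102 -t 10
```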
PS: I still don't know if I can actually use the 10G port despite its Active state, because I have connected the server to one of the switch's 4 uplink 10G ports (all the other ports are 1G).
I know that uplink ports are normally used for switch-to-switch/router etc. connections, but in the switch's menu I couldn't find an option to declare that specific port as an access port or anything similar.