Default Ice driver Kernel 6 poor performance

Jun 3, 2020
The native kernel ice driver performs poorly compared to the driver provided by Intel. Just take a look at the simple tests I conducted (the hardware is identical; the only difference is the driver).


Whenever the kernel updates, it's a hassle to rebuild and load the driver manually. Is there any possibility of including the updated driver in the kernel by default? It would save a lot of work.

Thank you.


Server with the Intel ice driver: https://www.intel.com/content/www/u...iver-for-e810-series-devices-under-linux.html

root@SV5:~# modinfo ice
filename: /lib/modules/6.2.16-12-pve/updates/drivers/net/ethernet/intel/ice/ice.ko
firmware: intel/ice/ddp/ice.pkg
version: 1.12.6
license: GPL v2
description: Intel(R) Ethernet Connection E800 Series Linux Driver
author: Intel Corporation, <linux.nics@intel.com>
srcversion: 11D72A85020BA96FBC4D15D
alias: pci:v00008086d00001888sv*sd*bc*sc*i*
alias: pci:v00008086d0000579Fsv*sd*bc*sc*i*
alias: pci:v00008086d0000579Esv*sd*bc*sc*i*
alias: pci:v00008086d0000579Dsv*sd*bc*sc*i*
alias: pci:v00008086d0000579Csv*sd*bc*sc*i*
alias: pci:v00008086d0000151Dsv*sd*bc*sc*i*
alias: pci:v00008086d0000124Fsv*sd*bc*sc*i*
alias: pci:v00008086d0000124Esv*sd*bc*sc*i*
alias: pci:v00008086d0000124Dsv*sd*bc*sc*i*
alias: pci:v00008086d0000124Csv*sd*bc*sc*i*
alias: pci:v00008086d0000189Asv*sd*bc*sc*i*
alias: pci:v00008086d00001899sv*sd*bc*sc*i*
alias: pci:v00008086d00001898sv*sd*bc*sc*i*
alias: pci:v00008086d00001897sv*sd*bc*sc*i*
alias: pci:v00008086d00001894sv*sd*bc*sc*i*
alias: pci:v00008086d00001893sv*sd*bc*sc*i*
alias: pci:v00008086d00001892sv*sd*bc*sc*i*
alias: pci:v00008086d00001891sv*sd*bc*sc*i*
alias: pci:v00008086d00001890sv*sd*bc*sc*i*
alias: pci:v00008086d0000188Esv*sd*bc*sc*i*
alias: pci:v00008086d0000188Dsv*sd*bc*sc*i*
alias: pci:v00008086d0000188Csv*sd*bc*sc*i*
alias: pci:v00008086d0000188Bsv*sd*bc*sc*i*
alias: pci:v00008086d0000188Asv*sd*bc*sc*i*
alias: pci:v00008086d0000159Bsv*sd*bc*sc*i*
alias: pci:v00008086d0000159Asv*sd*bc*sc*i*
alias: pci:v00008086d00001599sv*sd*bc*sc*i*
alias: pci:v00008086d00001593sv*sd*bc*sc*i*
alias: pci:v00008086d00001592sv*sd*bc*sc*i*
alias: pci:v00008086d00001591sv*sd*bc*sc*i*
depends: gnss
retpoline: Y
name: ice
vermagic: 6.2.16-12-pve SMP preempt mod_unload modversions
parm: debug:netif level (0=none,...,16=all) (int)
parm: fwlog_level:FW event level to log. All levels <= to the specified value are enabled. Values: 0=none, 1=error, 2=warning, 3=normal, 4=verbose. Invalid values: >=5 (ushort)
parm: fwlog_events:FW events to log (32-bit mask) (ulong)

iperf results:

root@SV6:~# iperf -c SV5 -T s1 -P 12 -l 32768 -w 128M -R -t 15
(...)
[SUM] 0.0000-15.1985 sec 172 GBytes 97.4 Gbits/sec

root@SV6:~# iperf -c SV5 -T s1 -P 12 -l 32768 -w 128M -R -t 15 -R
(...)
[SUM] 0.0000-15.0041 sec 164 GBytes 93.9 Gbits/sec





Default Kernel ICE Driver.

root@SV4:~# modinfo ice
filename: /lib/modules/6.2.16-12-pve/kernel/drivers/net/ethernet/intel/ice/ice.ko
firmware: intel/ice/ddp/ice.pkg
license: GPL v2
description: Intel(R) Ethernet Connection E800 Series Linux Driver
author: Intel Corporation, <linux.nics@intel.com>
srcversion: 9DF0E1DCCF2DFF66023E4E7
alias: pci:v00008086d00001888sv*sd*bc*sc*i*
alias: pci:v00008086d0000151Dsv*sd*bc*sc*i*
alias: pci:v00008086d0000124Fsv*sd*bc*sc*i*
alias: pci:v00008086d0000124Esv*sd*bc*sc*i*
alias: pci:v00008086d0000124Dsv*sd*bc*sc*i*
alias: pci:v00008086d0000124Csv*sd*bc*sc*i*
alias: pci:v00008086d0000189Asv*sd*bc*sc*i*
alias: pci:v00008086d00001899sv*sd*bc*sc*i*
alias: pci:v00008086d00001898sv*sd*bc*sc*i*
alias: pci:v00008086d00001897sv*sd*bc*sc*i*
alias: pci:v00008086d00001894sv*sd*bc*sc*i*
alias: pci:v00008086d00001893sv*sd*bc*sc*i*
alias: pci:v00008086d00001892sv*sd*bc*sc*i*
alias: pci:v00008086d00001891sv*sd*bc*sc*i*
alias: pci:v00008086d00001890sv*sd*bc*sc*i*
alias: pci:v00008086d0000188Esv*sd*bc*sc*i*
alias: pci:v00008086d0000188Dsv*sd*bc*sc*i*
alias: pci:v00008086d0000188Csv*sd*bc*sc*i*
alias: pci:v00008086d0000188Bsv*sd*bc*sc*i*
alias: pci:v00008086d0000188Asv*sd*bc*sc*i*
alias: pci:v00008086d0000159Bsv*sd*bc*sc*i*
alias: pci:v00008086d0000159Asv*sd*bc*sc*i*
alias: pci:v00008086d00001599sv*sd*bc*sc*i*
alias: pci:v00008086d00001593sv*sd*bc*sc*i*
alias: pci:v00008086d00001592sv*sd*bc*sc*i*
alias: pci:v00008086d00001591sv*sd*bc*sc*i*
depends:
retpoline: Y
intree: Y
name: ice
vermagic: 6.2.16-12-pve SMP preempt mod_unload modversions
sig_id: PKCS#7
signer: Build time autogenerated kernel key
sig_key: 3C:77:A0:CB:73:A7:08:28:ED:35:3F:65:C9:6B:95:4A:A6:7F:F4:DC
sig_hashalgo: sha512
signature: B2:57:3A:D8:E0:CB:85:10:87:A4:8A:7C:8E:DB:E9:B9:2F:CC:28:B4:
46:F1:03:1E:74:74:ED:1B:C6:CD:B1:DB:B3:5E:E6:B9:3E:DB:EB:95:
A8:DF:7E:3B:5D:E8:EA:3F:DA:B5:A7:55:F9:32:FE:02:12:CE:C9:0B:
C9:83:11:37:79:9E:22:B2:8E:C7:BD:D8:85:00:C1:C8:79:1C:4E:D7:
C6:33:F5:63:9D:30:63:E5:73:72:D8:73:7C:34:1D:77:1C:43:7F:BA:
94:A4:82:23:9C:28:2A:3C:E4:6A:7E:07:BE:C0:4B:B2:07:97:AD:37:
23:47:36:F0:D8:D6:9C:66:30:6C:0F:E7:E3:4F:1F:26:3A:0F:2C:00:
AA:02:33:3E:27:AE:03:37:9C:01:B5:CB:72:54:61:E6:56:BD:96:44:
8B:0D:BB:58:3A:56:6F:B4:43:71:BB:73:AF:99:D2:05:D0:8F:5B:B5:
66:49:8D:E4:D1:B8:D7:FB:4E:D2:3E:19:03:F4:B1:9F:B4:46:70:3B:
CF:51:82:E9:20:DF:17:4F:8E:4A:68:73:15:2F:B6:1E:18:39:E4:E9:
69:D3:6E:7B:3D:08:B4:81:B1:FA:F3:6E:95:33:EE:95:A0:97:02:32:
CF:DF:86:4D:9F:B0:90:E5:9C:CD:31:00:E0:62:5D:F3:F5:65:97:3F:
57:E8:7A:D9:13:5E:DD:7A:08:DF:2B:6A:53:3E:EF:F6:14:4C:18:BB:
4C:6F:04:F1:F9:01:CC:DC:D1:0D:32:C3:6C:A8:8A:B7:20:53:5E:A0:
FC:06:8C:78:C3:0D:BF:14:D4:D7:00:89:A5:EA:78:F0:33:23:3D:D8:
27:15:21:1E:A8:98:20:84:9F:8D:1C:AF:B2:3D:AA:E7:AE:E3:65:27:
FD:C6:61:B0:B6:C6:D1:0C:B4:8A:26:82:2B:EA:4B:D2:F5:B2:29:BF:
4A:38:62:3A:C8:ED:39:2B:F3:CB:F9:77:40:DA:B1:BC:0A:37:75:6C:
E7:F1:F1:FD:B8:4B:F1:75:82:F6:E5:79:26:5C:19:14:92:AD:C9:EB:
C2:FE:B1:2C:EC:49:DD:7C:9F:1B:1C:A5:30:A3:54:07:A1:7B:05:D1:
95:98:AF:77:27:D3:4D:EA:15:6C:05:7F:BF:5A:25:A0:C1:38:96:3E:
39:B8:83:BC:A8:69:46:68:22:33:07:B8:27:19:09:E6:AE:EF:9E:D3:
07:37:42:4E:BA:96:D9:FA:11:03:C2:B5:F7:AC:58:B5:FC:F3:3E:73:
FA:2F:8B:81:03:3C:69:2B:A7:B7:D1:00:1D:2B:E1:19:81:A7:AA:83:
C2:4B:9C:5E:4A:99:D1:36:7B:66:00:A3
parm: debug:netif level (0=none,...,16=all) (int)


root@SV6:~# iperf -c SV4 -T s1 -P 12 -l 32768 -w 128M -R -t 15
(...)
[SUM] 0.0000-15.0192 sec 39.4 GBytes 22.5 Gbits/sec

root@SV6:~# iperf -c SV4 -T s1 -P 12 -l 32768 -w 128M -R -t 15 -R
(...)
[SUM] 0.0000-15.0223 sec 39.8 GBytes 22.8 Gbits/sec
 
Hi,

I'm also facing poor performance with my 100Gbit Intel E810 card with default driver.
To confirm it's related to the driver, I would like to give a try to the Intel driver.

Could you please explain how you build (I had some errors about kernel devel), then install the Intel driver?

Thank you!
 
Hello

Did you find a solution? If not, let me know and I can help; I fixed the issue on my setup.
 
Hello,

No, I was pretty busy since my post, and could not handle this.
If you have the way to proceed and can share it, I would be happy to have a try!

Thanks
 
My System
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-8-pve)
pve-manager: 8.1.3 (running version: 8.1.3/b46aac3b42da5d15)
proxmox-kernel-helper: 8.1.0
pve-kernel-6.2: 8.0.5
proxmox-kernel-6.5: 6.5.11-8
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.2.16-20-pve: 6.2.16-20
proxmox-kernel-6.2: 6.2.16-20
pve-kernel-6.2.16-4-pve: 6.2.16-5
pve-kernel-6.2.16-3-pve: 6.2.16-3
ceph: 17.2.7-pve1
ceph-fuse: 17.2.7-pve1
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx7
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.7
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.2-1
proxmox-backup-file-restore: 3.1.2-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.2
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-2
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.1.5
pve-qemu-kvm: 8.1.2-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1

You need to do this on all nodes of the cluster:
  1. Download the ice-1.13.7.tar.gz driver source archive from Intel
  2. apt-get install pve-headers
  3. restart the node
  4. apt install make cmake
  5. tar zxf ice-1.13.7.tar.gz
  6. cd ice-1.13.7/src
  7. make install
  8. reboot
  9. modinfo ice
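The steps above can be sketched as a single script. This is my own wrapper, not part of the original instructions: the function name `install_intel_ice` and the `pveversion` guard are my additions, and the script assumes the ice-1.13.7.tar.gz archive has already been downloaded into the current directory. A reboot is still needed between installing the headers and building if the kernel was just upgraded.

```shell
#!/bin/bash
# Sketch of the manual build steps above; run on every node of the cluster.
# Assumes ice-1.13.7.tar.gz has already been downloaded from Intel into $PWD.

install_intel_ice() {
    # Build prerequisites: headers for the running PVE kernel, plus make/cmake
    apt-get install -y pve-headers make cmake || return 1

    # Unpack the Intel out-of-tree driver source and build/install the module
    tar zxf ice-1.13.7.tar.gz || return 1
    cd ice-1.13.7/src || return 1
    make install   # installs ice.ko under /lib/modules/$(uname -r)/updates/

    echo "Done; reboot the node, then verify the version with: modinfo ice"
}

# Guard (my addition): only attempt the build on a real Proxmox VE node.
if command -v pveversion >/dev/null 2>&1; then
    install_intel_ice
else
    echo "Not a Proxmox VE node; skipping."
fi
```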
My modinfo ice output after the update and reboot:
Code:
filename:       /lib/modules/6.5.11-8-pve/updates/drivers/net/ethernet/intel/ice/ice.ko
firmware:       intel/ice/ddp/ice.pkg
version:        1.13.7
license:        GPL v2
description:    Intel(R) Ethernet Connection E800 Series Linux Driver
author:         Intel Corporation, <linux.nics@intel.com>
srcversion:     3EB4549D125EAA59F08D047
alias:          pci:v00008086d00001888sv*sd*bc*sc*i*
alias:          pci:v00008086d0000579Fsv*sd*bc*sc*i*
alias:          pci:v00008086d0000579Esv*sd*bc*sc*i*
alias:          pci:v00008086d0000579Dsv*sd*bc*sc*i*
alias:          pci:v00008086d0000579Csv*sd*bc*sc*i*
alias:          pci:v00008086d0000151Dsv*sd*bc*sc*i*
alias:          pci:v00008086d0000124Fsv*sd*bc*sc*i*
alias:          pci:v00008086d0000124Esv*sd*bc*sc*i*
alias:          pci:v00008086d0000124Dsv*sd*bc*sc*i*
alias:          pci:v00008086d0000124Csv*sd*bc*sc*i*
alias:          pci:v00008086d0000189Asv*sd*bc*sc*i*
alias:          pci:v00008086d00001899sv*sd*bc*sc*i*
alias:          pci:v00008086d00001898sv*sd*bc*sc*i*
alias:          pci:v00008086d00001897sv*sd*bc*sc*i*
alias:          pci:v00008086d00001894sv*sd*bc*sc*i*
alias:          pci:v00008086d00001893sv*sd*bc*sc*i*
alias:          pci:v00008086d00001892sv*sd*bc*sc*i*
alias:          pci:v00008086d00001891sv*sd*bc*sc*i*
alias:          pci:v00008086d00001890sv*sd*bc*sc*i*
alias:          pci:v00008086d0000188Esv*sd*bc*sc*i*
alias:          pci:v00008086d0000188Dsv*sd*bc*sc*i*
alias:          pci:v00008086d0000188Csv*sd*bc*sc*i*
alias:          pci:v00008086d0000188Bsv*sd*bc*sc*i*
alias:          pci:v00008086d0000188Asv*sd*bc*sc*i*
alias:          pci:v00008086d0000159Bsv*sd*bc*sc*i*
alias:          pci:v00008086d0000159Asv*sd*bc*sc*i*
alias:          pci:v00008086d00001599sv*sd*bc*sc*i*
alias:          pci:v00008086d00001593sv*sd*bc*sc*i*
alias:          pci:v00008086d00001592sv*sd*bc*sc*i*
alias:          pci:v00008086d00001591sv*sd*bc*sc*i*
depends:        gnss
retpoline:      Y
name:           ice
vermagic:       6.5.11-7-pve SMP preempt mod_unload modversions
parm:           debug:netif level (0=none,...,16=all) (int)
parm:           fwlog_level:FW event level to log. All levels <= to the specified value are enabled. Values: 0=none, 1=error, 2=warning, 3=normal, 4=verbose. Invalid values: >=5 (ushort)
parm:           fwlog_events:FW events to log (32-bit mask) (long)
 
Thank you! After following your instructions, modinfo ice shows exactly the same version as yours on both of my PVE test nodes (version: 1.13.7 and srcversion: 3EB4549D125EAA59F08D047), so I guess I'm now running the Intel ice driver.

For now I cannot see any improvement on my 100Gbit link. I'm using a single 100Gb direct attach cable from pve1 to pve2 to avoid any other interactions (no switch; the DAC is plugged from the E810 on pve1 straight into the E810 on pve2).

When running iperf -c pve2 -T s1 -P 12 -l 32768 -w 128M -R -t 15

I'm getting only ~44 Gbits/sec with 12 threads; details below.

Code:
[ ID] Interval       Transfer     Bandwidth
[ *8] 0.0000-15.0126 sec  6.14 GBytes  3.52 Gbits/sec
[ *2] 0.0000-15.0109 sec  6.55 GBytes  3.75 Gbits/sec
[ *1] 0.0000-15.0072 sec  6.80 GBytes  3.89 Gbits/sec
[ *5] 0.0000-15.0078 sec  5.91 GBytes  3.39 Gbits/sec
[ *7] 0.0000-15.0120 sec  6.40 GBytes  3.66 Gbits/sec
[ *3] 0.0000-15.0079 sec  7.23 GBytes  4.14 Gbits/sec
[ *6] 0.0000-15.0074 sec  6.53 GBytes  3.74 Gbits/sec
[*12] 0.0000-15.0132 sec  6.59 GBytes  3.77 Gbits/sec
[ *9] 0.0000-15.0111 sec  5.68 GBytes  3.25 Gbits/sec
[*10] 0.0000-15.0193 sec  6.91 GBytes  3.95 Gbits/sec
[*11] 0.0000-15.0190 sec  6.28 GBytes  3.59 Gbits/sec
[ *4] 0.0000-15.0185 sec  5.54 GBytes  3.17 Gbits/sec
[SUM] 0.0000-15.0191 sec  76.6 GBytes  43.8 Gbits/sec

Two additional comments:
- I rebooted after the make install command
- I was getting the same kind of speed with the previous driver (the kernel version)

Any idea?
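A quick way to confirm which driver, version, and firmware an interface is actually bound to is `ethtool -i`. The sketch below is my own helper: the function name `show_driver_info` and the placeholder interface name `ens1f0` are assumptions, so substitute your real E810 interface.

```shell
#!/bin/bash
# Show which driver/version/firmware an interface is bound to.
# ens1f0 is only a placeholder; pass your actual E810 interface name.

show_driver_info() {
    local iface="${1:-ens1f0}"
    if command -v ethtool >/dev/null 2>&1 && command -v ip >/dev/null 2>&1 \
        && ip link show "$iface" >/dev/null 2>&1; then
        # Fields of interest: "driver: ice", "version:" (1.13.7 indicates the
        # out-of-tree Intel build) and "firmware-version:" (the NVM on the card).
        ethtool -i "$iface"
    else
        echo "ethtool or interface $iface not available; adjust the name."
    fi
}

show_driver_info "$@"
```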
 
Please post the output of
Code:
pveversion -v
uname -a
modinfo ice
after you reboot both nodes.
 
On a somewhat related note, I've been working with Intel support on the E810 with Ubuntu 20.04 systems and a variety of... interesting issues.

The inbox driver (the one that ships with 20.04) is clearly very lacking. Getting the firmware updated on the cards has also been a challenge, as 20.04 apparently doesn't support "devlink region".

With current firmware (4.40) and the latest driver (1.13.7), we have had an OK experience. The 1.12.7 driver is probably fine too, but the 1.13.7 driver and the 4.40 firmware only came out around the holidays.

We had issues with bonding setups (MLAG) and the card falling off the bus and resetting. If you aren't current on firmware (along with the updated driver), updating will hopefully help.

We have Mellanox cards in our Proxmox nodes and have had a lovely experience with those so far. I think the Intel stuff is alright (assuming the latest driver and firmware), but managing a non-inbox driver (even on Ubuntu) isn't a ton of fun.
 
pveversion -v result:
Code:
proxmox-ve: 8.1.0 (running kernel: 6.5.11-8-pve)
pve-manager: 8.1.4 (running version: 8.1.4/ec5affc9e41f1d79)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.5: 6.5.11-8
proxmox-kernel-6.5.11-8-pve-signed: 6.5.11-8
proxmox-kernel-6.5.11-7-pve-signed: 6.5.11-7
proxmox-kernel-6.5.11-4-pve-signed: 6.5.11-4
ceph: 18.2.1-pve2
ceph-fuse: 18.2.1-pve2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.0
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.0.7
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.1.0
libpve-guest-common-perl: 5.0.6
libpve-http-server-perl: 5.0.5
libpve-network-perl: 0.9.5
libpve-rs-perl: 0.8.8
libpve-storage-perl: 8.0.5
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve4
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.1.4-1
proxmox-backup-file-restore: 3.1.4-1
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.4
proxmox-widget-toolkit: 4.1.3
pve-cluster: 8.0.5
pve-container: 5.0.8
pve-docs: 8.1.3
pve-edk2-firmware: 4.2023.08-3
pve-firewall: 5.0.3
pve-firmware: 3.9-1
pve-ha-manager: 4.0.3
pve-i18n: 3.2.0
pve-qemu-kvm: 8.1.5-1
pve-xtermjs: 5.3.0-3
qemu-server: 8.0.10
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.2-pve1

uname -a result:
Code:
Linux pve91 6.5.11-8-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.11-8 (2024-01-30T12:27Z) x86_64 GNU/Linux

modinfo ice result:
Code:
filename:       /lib/modules/6.5.11-8-pve/updates/drivers/net/ethernet/intel/ice/ice.ko
firmware:       intel/ice/ddp/ice.pkg
version:        1.13.7
license:        GPL v2
description:    Intel(R) Ethernet Connection E800 Series Linux Driver
author:         Intel Corporation, <linux.nics@intel.com>
srcversion:     3EB4549D125EAA59F08D047
alias:          pci:v00008086d00001888sv*sd*bc*sc*i*
alias:          pci:v00008086d0000579Fsv*sd*bc*sc*i*
alias:          pci:v00008086d0000579Esv*sd*bc*sc*i*
alias:          pci:v00008086d0000579Dsv*sd*bc*sc*i*
alias:          pci:v00008086d0000579Csv*sd*bc*sc*i*
alias:          pci:v00008086d0000151Dsv*sd*bc*sc*i*
alias:          pci:v00008086d0000124Fsv*sd*bc*sc*i*
alias:          pci:v00008086d0000124Esv*sd*bc*sc*i*
alias:          pci:v00008086d0000124Dsv*sd*bc*sc*i*
alias:          pci:v00008086d0000124Csv*sd*bc*sc*i*
alias:          pci:v00008086d0000189Asv*sd*bc*sc*i*
alias:          pci:v00008086d00001899sv*sd*bc*sc*i*
alias:          pci:v00008086d00001898sv*sd*bc*sc*i*
alias:          pci:v00008086d00001897sv*sd*bc*sc*i*
alias:          pci:v00008086d00001894sv*sd*bc*sc*i*
alias:          pci:v00008086d00001893sv*sd*bc*sc*i*
alias:          pci:v00008086d00001892sv*sd*bc*sc*i*
alias:          pci:v00008086d00001891sv*sd*bc*sc*i*
alias:          pci:v00008086d00001890sv*sd*bc*sc*i*
alias:          pci:v00008086d0000188Esv*sd*bc*sc*i*
alias:          pci:v00008086d0000188Dsv*sd*bc*sc*i*
alias:          pci:v00008086d0000188Csv*sd*bc*sc*i*
alias:          pci:v00008086d0000188Bsv*sd*bc*sc*i*
alias:          pci:v00008086d0000188Asv*sd*bc*sc*i*
alias:          pci:v00008086d0000159Bsv*sd*bc*sc*i*
alias:          pci:v00008086d0000159Asv*sd*bc*sc*i*
alias:          pci:v00008086d00001599sv*sd*bc*sc*i*
alias:          pci:v00008086d00001593sv*sd*bc*sc*i*
alias:          pci:v00008086d00001592sv*sd*bc*sc*i*
alias:          pci:v00008086d00001591sv*sd*bc*sc*i*
depends:        gnss
retpoline:      Y
name:           ice
vermagic:       6.5.11-8-pve SMP preempt mod_unload modversions
parm:           debug:netif level (0=none,...,16=all) (int)
parm:           fwlog_level:FW event level to log. All levels <= to the specified value are enabled. Values: 0=none, 1=error, 2=warning, 3=normal, 4=verbose. Invalid values: >=5 (ushort)
parm:           fwlog_events:FW events to log (32-bit mask) (ulong)
 
I forgot to mention something that could be important: the hardware is DELL R7515 servers.
The E810 is the DELL version (ref 0DWNRF), with the latest firmware installed (the one provided by DELL, v22.5.7, published 01 Dec 2023).

I will also double-check my copper DACs (brand FS), because I realized they have a "double compatibility" coding (Intel/Dell), so that could also be part of the problem.
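To see how a DAC identifies itself to the NIC (vendor name, part number, compliance codes), the module EEPROM can be dumped with `ethtool -m`. As above, this is my own helper sketch: `show_dac_info` and the interface name `ens1f0` are placeholders, so substitute the E810 port the FS DAC is plugged into.

```shell
#!/bin/bash
# Inspect the DAC/transceiver EEPROM to see how the cable presents itself.
# ens1f0 is only a placeholder; pass the real E810 interface name.

show_dac_info() {
    local iface="${1:-ens1f0}"
    if command -v ethtool >/dev/null 2>&1 && command -v ip >/dev/null 2>&1 \
        && ip link show "$iface" >/dev/null 2>&1; then
        # Look at "Vendor name", "Vendor PN" and the compliance codes to check
        # which compatibility coding (Intel vs Dell) the FS DAC reports.
        ethtool -m "$iface"
    else
        echo "ethtool or interface $iface not available; adjust the name."
    fi
}

show_dac_info "$@"
```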
 
