Kernel 4.15.18-9 incompatible with 10 Gbps card

Discussion in 'Proxmox VE: Installation and configuration' started by tuantv, Jan 5, 2019.

  1. tuantv

    tuantv New Member

    Joined:
    Jan 5, 2019
    Messages:
    15
    Likes Received:
    0
I am installing the latest version of Proxmox and the 10Gb interface reports "linkdown", as shown below. Could someone with experience please help me?
     

    Attached Files:

• 212.PNG (13.4 KB)
  2. elurex

    elurex Member
    Proxmox Subscriber

    Joined:
    Oct 28, 2015
    Messages:
    149
    Likes Received:
    4
What NIC model?
     
  3. tuantv

    tuantv New Member

    Joined:
    Jan 5, 2019
    Messages:
    15
    Likes Received:
    0
Thanks for the reply. My card is an "Intel 10Gb Ethernet Converged Network Adapter X520 Dual Port E69818".
     
  4. tuantv

    tuantv New Member

    Joined:
    Jan 5, 2019
    Messages:
    15
    Likes Received:
    0
Can someone help me with this problem?
     
  5. joshin

    joshin Member
    Proxmox Subscriber

    Joined:
    Jul 23, 2013
    Messages:
    92
    Likes Received:
    8
    Did you install the firmware for it?
     
  6. elurex

    elurex Member
    Proxmox Subscriber

    Joined:
    Oct 28, 2015
    Messages:
    149
    Likes Received:
    4
@tuantv

I have no such issue:
[Screenshot: upload_2019-1-8_11-46-15.png]

    Code:
    root@pve3:~# pveversion -v
    proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
    pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
    pve-kernel-4.15: 5.2-12
    pve-kernel-4.15.18-9-pve: 4.15.18-30
    pve-kernel-4.15.18-7-pve: 4.15.18-27
    pve-kernel-4.15.18-1-pve: 4.15.18-19
    pve-kernel-4.15.17-3-pve: 4.15.17-14
    pve-kernel-4.15.17-1-pve: 4.15.17-9
    ceph: 12.2.8-pve1
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon: 1.2-2
    libjs-extjs: 6.0.1-2
    libpve-access-control: 5.1-3
    libpve-apiclient-perl: 2.0-5
    libpve-common-perl: 5.0-43
    libpve-guest-common-perl: 2.0-18
    libpve-http-server-perl: 2.0-11
    libpve-storage-perl: 5.0-33
    libqb0: 1.0.3-1~bpo9
    lvm2: 2.02.168-pve6
    lxc-pve: 3.0.2+pve1-5
    lxcfs: 3.0.2-2
    novnc-pve: 1.0.0-2
    openvswitch-switch: 2.7.0-3
    proxmox-widget-toolkit: 1.0-22
    pve-cluster: 5.0-31
    pve-container: 2.0-31
    pve-docs: 5.3-1
    pve-edk2-firmware: 1.20181023-1
    pve-firewall: 3.0-16
    pve-firmware: 2.0-6
    pve-ha-manager: 2.0-5
    pve-i18n: 1.0-9
    pve-libspice-server1: 0.14.1-1
    pve-qemu-kvm: 2.12.1-1
    pve-xtermjs: 1.0-5
    qemu-server: 5.0-43
    smartmontools: 6.5+svn4324-1
    spiceterm: 3.0-5
    vncterm: 1.5-3
    zfsutils-linux: 0.7.12-pve1~bpo1
     
  7. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member
    Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,198
    Likes Received:
    102
@tuantv: check the output of `dmesg` (ideally after removing the ixgbe module and inserting it again with rmmod/modprobe).

Usually it should give you a hint. Apart from that, apply all firmware updates on the system (and on the NIC as well, if applicable).
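
For example, a minimal sketch of reloading the driver and checking the kernel log (assuming the card uses the in-tree ixgbe module, as the X520/82599 does; don't run this over an interface you are currently connected through, since the link will drop):

Code:
rmmod ixgbe
modprobe ixgbe
dmesg | grep -i ixgbe | tail -n 50   # look for SFP+/link messages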
     
  8. tuantv

    tuantv New Member

    Joined:
    Jan 5, 2019
    Messages:
    15
    Likes Received:
    0
Here is the relevant output from my Proxmox node; please advise.

    Code:
    root@node03:~# ethtool -i enp4s0f1 | grep firmware-version
    firmware-version: 0x30030001
    root@node03:~# lspci | grep SFP
    04:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
    04:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
    root@node03:~# ip r
    default via 103.254.12.1 dev eno1 onlink
    10.200.0.0/16 dev enp4s0f1 proto kernel scope link src 10.200.23.254 linkdown
    103.254.12.0/25 dev eno1 proto kernel scope link src 103.254.12.23
    103.254.13.0/25 dev vmbr0 proto kernel scope link src 103.254.13.93
    103.254.14.0/25 dev vmbr1 proto kernel scope link src 103.254.14.93
    192.168.108.0/24 dev eno2 proto kernel scope link src 192.168.108.13
    root@node03:~# pveversion -v
    proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
    pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
    pve-kernel-4.15: 5.2-12
    pve-kernel-4.15.18-9-pve: 4.15.18-30
    corosync: 2.4.4-pve1
    criu: 2.11.1-1~bpo90
    glusterfs-client: 3.8.8-1
    ksm-control-daemon: 1.2-2
    libjs-extjs: 6.0.1-2
    libpve-access-control: 5.1-3
    libpve-apiclient-perl: 2.0-5
    libpve-common-perl: 5.0-43
    libpve-guest-common-perl: 2.0-18
    libpve-http-server-perl: 2.0-11
    libpve-storage-perl: 5.0-33
    libqb0: 1.0.3-1~bpo9
    lvm2: 2.02.168-pve6
    lxc-pve: 3.0.2+pve1-5
    lxcfs: 3.0.2-2
    novnc-pve: 1.0.0-2
    proxmox-widget-toolkit: 1.0-22
    pve-cluster: 5.0-31
    pve-container: 2.0-31
    pve-docs: 5.3-1
    pve-edk2-firmware: 1.20181023-1
    pve-firewall: 3.0-16
    pve-firmware: 2.0-6
    pve-ha-manager: 2.0-5
    pve-i18n: 1.0-9
    pve-libspice-server1: 0.14.1-1
    pve-qemu-kvm: 2.12.1-1
    pve-xtermjs: 1.0-5
    qemu-server: 5.0-43
    smartmontools: 6.5+svn4324-1
    spiceterm: 3.0-5
    vncterm: 1.5-3
    zfsutils-linux: 0.7.12-pve1~bpo1
    root@node03:~# dme
    dmesg     dmeventd
    root@node03:~# dmesg | grep enp4s0f1
    [    2.776473] ixgbe 0000:04:00.1 enp4s0f1: renamed from eth1
    [   14.308918] ixgbe 0000:04:00.1: registered PHC device on enp4s0f1
    [   14.416792] IPv6: ADDRCONF(NETDEV_UP): enp4s0f1: link is not ready
    [1538600.142842] ixgbe 0000:04:00.1: removed PHC on enp4s0f1
    [1538653.591932] ixgbe 0000:04:00.1: registered PHC device on enp4s0f1
    [1538653.699109] IPv6: ADDRCONF(NETDEV_UP): enp4s0f1: link is not ready
    [1538888.554902] ixgbe 0000:04:00.1: removed PHC on enp4s0f1
    [1538893.222490] ixgbe 0000:04:00.1: registered PHC device on enp4s0f1
    [1538893.329832] IPv6: ADDRCONF(NETDEV_UP): enp4s0f1: link is not ready
     
  9. sb-jw

    sb-jw Active Member

    Joined:
    Jan 23, 2018
    Messages:
    433
    Likes Received:
    37
Please paste your interfaces config. What happens if you try to start the interface manually?
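
For example (a sketch, assuming the interface name enp4s0f1 from the earlier output):

Code:
ip link set enp4s0f1 up
ip -s link show enp4s0f1            # check for LOWER_UP / carrier and error counters
# or via ifupdown:
ifdown enp4s0f1 && ifup enp4s0f1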
     
  10. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member
    Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,198
    Likes Received:
    102
Sometimes the dmesg messages don't contain the interface name (in that case, IIRC, the rename to predictable names is one of the last things that happens). Please post the complete output of dmesg (or at least some more context).
     
  11. tuantv

    tuantv New Member

    Joined:
    Jan 5, 2019
    Messages:
    15
    Likes Received:
    0
Please see the info below.
    Code:
    auto lo
    iface lo inet loopback
    
    auto eno2
    iface eno2 inet static
            address  192.168.108.13
            netmask  255.255.255.0
    
    auto enp4s0f1
    iface enp4s0f1 inet static
            address  10.200.23.254
            netmask  255.255.0.0
            hwaddress ether 00:1b:21:8a:c6:79
    
    auto eno1
    iface eno1 inet static
            address  103.254.12.23
            netmask  255.255.255.128
            gateway  103.254.12.1
            post-up echo 1 > /proc/sys/net/ipv4/ip_forward
            post-up echo 1 > /proc/sys/net/ipv4/conf/eno1/proxy_arp
    
    
    auto vmbr0
    iface vmbr0 inet static
            address  103.254.13.93
            netmask  255.255.255.128
            bridge-ports none
            bridge-stp off
            bridge-fd 0
    
    auto vmbr1
    iface vmbr1 inet static
            address  103.254.14.93
            netmask  255.255.255.128
            bridge-ports none
            bridge-stp off
            bridge-fd 0
    
    root@node03:~# ifdown enp4s0f1
    root@node03:~# ifup enp4s0f1
    root@node03:~# ip r
    default via 103.254.12.1 dev eno1 onlink
    10.200.0.0/16 dev enp4s0f1 proto kernel scope link src 10.200.23.254 linkdown
    103.254.12.0/25 dev eno1 proto kernel scope link src 103.254.12.23
    103.254.13.0/25 dev vmbr0 proto kernel scope link src 103.254.13.93
    103.254.14.0/25 dev vmbr1 proto kernel scope link src 103.254.14.93
    192.168.108.0/24 dev eno2 proto kernel scope link src 192.168.108.13
    
     
  12. tuantv

    tuantv New Member

    Joined:
    Jan 5, 2019
    Messages:
    15
    Likes Received:
    0

    Attached Files:

  13. Stoiko Ivanov

    Stoiko Ivanov Proxmox Staff Member
    Staff Member

    Joined:
    May 2, 2018
    Messages:
    1,198
    Likes Received:
    102
Hmm - nothing specific in the dmesg (it basically looks like the NIC simply isn't plugged in):
* Where is the NIC plugged in? Does the switch/other side detect a link?
* Is the other side configured to have the interface up?

From the /etc/network/interfaces you posted:
* Why do you need the `hwaddress ether` line for the NIC? Do you get a link if you remove that line?

    `ethtool` can yield some helpful information - try to see if you can find an issue with it:
    * ethtool $ifname
    * ethtool -S $ifname
    * ethtool -i $ifname
    * ethtool --phy-statistics $ifname
    should get you started

If all of the above doesn't resolve the issue: the ixgbe module has a debug parameter - maybe load the module with debug enabled; it might show some information.
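
For example, a sketch of reloading the module with more verbose output (the exact parameter and level are an assumption - check `modinfo ixgbe` for what your driver build actually supports):

Code:
modinfo ixgbe | grep -i parm        # list available module parameters
rmmod ixgbe
modprobe ixgbe debug=16             # higher value = more verbose, if supported
dmesg | grep -i ixgbe | tail -n 50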

    hope this helps!
     
  14. tuantv

    tuantv New Member

    Joined:
    Jan 5, 2019
    Messages:
    15
    Likes Received:
    0
* The NIC is attached via a PCI slot.
* I had added the `hwaddress ether` line for the NIC; after deleting that line, the NIC still reports linkdown.
See the ethtool output below.
I have done 4 Proxmox builds with 4 different Intel 10Gb cards and all show the linkdown error. Can you support us via SSH? If possible, we can pay for support and subscribe to your products.
    Code:
    root@node03:~# ethtool enp4s0f1
    Settings for enp4s0f1:
            Supported ports: [ FIBRE ]
            Supported link modes:   10000baseT/Full
            Supported pause frame use: Symmetric
            Supports auto-negotiation: No
            Advertised link modes:  10000baseT/Full
            Advertised pause frame use: Symmetric
            Advertised auto-negotiation: No
            Speed: Unknown!
            Duplex: Unknown! (255)
            Port: Other
            PHYAD: 0
            Transceiver: internal
            Auto-negotiation: off
            Supports Wake-on: d
            Wake-on: d
            Current message level: 0x00000007 (7)
                                   drv probe link
            Link detected: no
    root@node03:~# ethtool -S         Link detected: no
    ethtool: bad command line argument(s)
    For more information run ethtool -h
    root@node03:~# ethtool -S enp4s0f1
    NIC statistics:
         rx_packets: 0
         tx_packets: 0
         rx_bytes: 0
         tx_bytes: 0
         rx_pkts_nic: 0
         tx_pkts_nic: 0
         rx_bytes_nic: 0
         tx_bytes_nic: 0
         lsc_int: 0
         tx_busy: 0
         non_eop_descs: 0
         rx_errors: 0
         tx_errors: 0
         rx_dropped: 0
         tx_dropped: 0
         multicast: 0
         broadcast: 0
         rx_no_buffer_count: 0
         collisions: 0
         rx_over_errors: 0
         rx_crc_errors: 0
         rx_frame_errors: 0
         hw_rsc_aggregated: 0
         hw_rsc_flushed: 0
         fdir_match: 0
         fdir_miss: 0
         fdir_overflow: 0
         rx_fifo_errors: 0
         rx_missed_errors: 0
         tx_aborted_errors: 0
         tx_carrier_errors: 0
         tx_fifo_errors: 0
         tx_heartbeat_errors: 0
         tx_timeout_count: 0
         tx_restart_queue: 0
         rx_long_length_errors: 0
         rx_short_length_errors: 0
         tx_flow_control_xon: 0
         rx_flow_control_xon: 0
         tx_flow_control_xoff: 0
         rx_flow_control_xoff: 0
         rx_csum_offload_errors: 0
         alloc_rx_page: 49056
         alloc_rx_page_failed: 0
         alloc_rx_buff_failed: 0
         rx_no_dma_resources: 0
         os2bmc_rx_by_bmc: 0
         os2bmc_tx_by_bmc: 0
         os2bmc_tx_by_host: 0
         os2bmc_rx_by_host: 0
         tx_hwtstamp_timeouts: 0
         tx_hwtstamp_skipped: 0
         rx_hwtstamp_cleared: 0
         fcoe_bad_fccrc: 0
         rx_fcoe_dropped: 0
         rx_fcoe_packets: 0
         rx_fcoe_dwords: 0
         fcoe_noddp: 0
         fcoe_noddp_ext_buff: 0
         tx_fcoe_packets: 0
         tx_fcoe_dwords: 0
         tx_queue_0_packets: 0
         tx_queue_0_bytes: 0
         tx_queue_1_packets: 0
         tx_queue_1_bytes: 0
         tx_queue_2_packets: 0
         tx_queue_2_bytes: 0
         tx_queue_3_packets: 0
         tx_queue_3_bytes: 0
         tx_queue_4_packets: 0
         tx_queue_4_bytes: 0
         tx_queue_5_packets: 0
         tx_queue_5_bytes: 0
         tx_queue_6_packets: 0
         tx_queue_6_bytes: 0
         tx_queue_7_packets: 0
         tx_queue_7_bytes: 0
         tx_queue_8_packets: 0
         tx_queue_8_bytes: 0
         tx_queue_9_packets: 0
         tx_queue_9_bytes: 0
         tx_queue_10_packets: 0
         tx_queue_10_bytes: 0
         tx_queue_11_packets: 0
         tx_queue_11_bytes: 0
         tx_queue_12_packets: 0
         tx_queue_12_bytes: 0
         tx_queue_13_packets: 0
         tx_queue_13_bytes: 0
         tx_queue_14_packets: 0
         tx_queue_14_bytes: 0
         tx_queue_15_packets: 0
         tx_queue_15_bytes: 0
         tx_queue_16_packets: 0
         tx_queue_16_bytes: 0
         tx_queue_17_packets: 0
         tx_queue_17_bytes: 0
         tx_queue_18_packets: 0
         tx_queue_18_bytes: 0
         tx_queue_19_packets: 0
         tx_queue_19_bytes: 0
         tx_queue_20_packets: 0
         tx_queue_20_bytes: 0
         tx_queue_21_packets: 0
         tx_queue_21_bytes: 0
         tx_queue_22_packets: 0
         tx_queue_22_bytes: 0
         tx_queue_23_packets: 0
         tx_queue_23_bytes: 0
         tx_queue_24_packets: 0
         tx_queue_24_bytes: 0
         tx_queue_25_packets: 0
         tx_queue_25_bytes: 0
         tx_queue_26_packets: 0
         tx_queue_26_bytes: 0
         tx_queue_27_packets: 0
         tx_queue_27_bytes: 0
         tx_queue_28_packets: 0
         tx_queue_28_bytes: 0
         tx_queue_29_packets: 0
         tx_queue_29_bytes: 0
         tx_queue_30_packets: 0
         tx_queue_30_bytes: 0
         tx_queue_31_packets: 0
         tx_queue_31_bytes: 0
         tx_queue_32_packets: 0
         tx_queue_32_bytes: 0
         tx_queue_33_packets: 0
         tx_queue_33_bytes: 0
         tx_queue_34_packets: 0
         tx_queue_34_bytes: 0
         tx_queue_35_packets: 0
         tx_queue_35_bytes: 0
         tx_queue_36_packets: 0
         tx_queue_36_bytes: 0
         tx_queue_37_packets: 0
         tx_queue_37_bytes: 0
         tx_queue_38_packets: 0
         tx_queue_38_bytes: 0
         tx_queue_39_packets: 0
         tx_queue_39_bytes: 0
         tx_queue_40_packets: 0
         tx_queue_40_bytes: 0
         tx_queue_41_packets: 0
         tx_queue_41_bytes: 0
         tx_queue_42_packets: 0
         tx_queue_42_bytes: 0
         tx_queue_43_packets: 0
         tx_queue_43_bytes: 0
         tx_queue_44_packets: 0
         tx_queue_44_bytes: 0
         tx_queue_45_packets: 0
         tx_queue_45_bytes: 0
         tx_queue_46_packets: 0
         tx_queue_46_bytes: 0
         tx_queue_47_packets: 0
         tx_queue_47_bytes: 0
         tx_queue_48_packets: 0
         tx_queue_48_bytes: 0
         tx_queue_49_packets: 0
         tx_queue_49_bytes: 0
         tx_queue_50_packets: 0
         tx_queue_50_bytes: 0
         tx_queue_51_packets: 0
         tx_queue_51_bytes: 0
         tx_queue_52_packets: 0
         tx_queue_52_bytes: 0
         tx_queue_53_packets: 0
         tx_queue_53_bytes: 0
         tx_queue_54_packets: 0
         tx_queue_54_bytes: 0
         tx_queue_55_packets: 0
         tx_queue_55_bytes: 0
         tx_queue_56_packets: 0
         tx_queue_56_bytes: 0
         tx_queue_57_packets: 0
         tx_queue_57_bytes: 0
         tx_queue_58_packets: 0
         tx_queue_58_bytes: 0
         tx_queue_59_packets: 0
         tx_queue_59_bytes: 0
         tx_queue_60_packets: 0
         tx_queue_60_bytes: 0
         tx_queue_61_packets: 0
         tx_queue_61_bytes: 0
         tx_queue_62_packets: 0
         tx_queue_62_bytes: 0
         tx_queue_63_packets: 0
         tx_queue_63_bytes: 0
         rx_queue_0_packets: 0
         rx_queue_0_bytes: 0
         rx_queue_1_packets: 0
         rx_queue_1_bytes: 0
         rx_queue_2_packets: 0
         rx_queue_2_bytes: 0
         rx_queue_3_packets: 0
         rx_queue_3_bytes: 0
         rx_queue_4_packets: 0
         rx_queue_4_bytes: 0
         rx_queue_5_packets: 0
         rx_queue_5_bytes: 0
         rx_queue_6_packets: 0
         rx_queue_6_bytes: 0
         rx_queue_7_packets: 0
         rx_queue_7_bytes: 0
         rx_queue_8_packets: 0
         rx_queue_8_bytes: 0
         rx_queue_9_packets: 0
         rx_queue_9_bytes: 0
         rx_queue_10_packets: 0
         rx_queue_10_bytes: 0
         rx_queue_11_packets: 0
         rx_queue_11_bytes: 0
         rx_queue_12_packets: 0
         rx_queue_12_bytes: 0
         rx_queue_13_packets: 0
         rx_queue_13_bytes: 0
         rx_queue_14_packets: 0
         rx_queue_14_bytes: 0
         rx_queue_15_packets: 0
         rx_queue_15_bytes: 0
         rx_queue_16_packets: 0
         rx_queue_16_bytes: 0
         rx_queue_17_packets: 0
         rx_queue_17_bytes: 0
         rx_queue_18_packets: 0
         rx_queue_18_bytes: 0
         rx_queue_19_packets: 0
         rx_queue_19_bytes: 0
         rx_queue_20_packets: 0
         rx_queue_20_bytes: 0
         rx_queue_21_packets: 0
         rx_queue_21_bytes: 0
         rx_queue_22_packets: 0
         rx_queue_22_bytes: 0
         rx_queue_23_packets: 0
         rx_queue_23_bytes: 0
         rx_queue_24_packets: 0
         rx_queue_24_bytes: 0
         rx_queue_25_packets: 0
         rx_queue_25_bytes: 0
         rx_queue_26_packets: 0
         rx_queue_26_bytes: 0
         rx_queue_27_packets: 0
         rx_queue_27_bytes: 0
         rx_queue_28_packets: 0
         rx_queue_28_bytes: 0
         rx_queue_29_packets: 0
         rx_queue_29_bytes: 0
         rx_queue_30_packets: 0
         rx_queue_30_bytes: 0
         rx_queue_31_packets: 0
         rx_queue_31_bytes: 0
         rx_queue_32_packets: 0
         rx_queue_32_bytes: 0
         rx_queue_33_packets: 0
         rx_queue_33_bytes: 0
         rx_queue_34_packets: 0
         rx_queue_34_bytes: 0
         rx_queue_35_packets: 0
         rx_queue_35_bytes: 0
         rx_queue_36_packets: 0
         rx_queue_36_bytes: 0
         rx_queue_37_packets: 0
         rx_queue_37_bytes: 0
         rx_queue_38_packets: 0
         rx_queue_38_bytes: 0
         rx_queue_39_packets: 0
         rx_queue_39_bytes: 0
         rx_queue_40_packets: 0
         rx_queue_40_bytes: 0
         rx_queue_41_packets: 0
         rx_queue_41_bytes: 0
         rx_queue_42_packets: 0
         rx_queue_42_bytes: 0
         rx_queue_43_packets: 0
         rx_queue_43_bytes: 0
         rx_queue_44_packets: 0
         rx_queue_44_bytes: 0
         rx_queue_45_packets: 0
         rx_queue_45_bytes: 0
         rx_queue_46_packets: 0
         rx_queue_46_bytes: 0
         rx_queue_47_packets: 0
         rx_queue_47_bytes: 0
         rx_queue_48_packets: 0
         rx_queue_48_bytes: 0
         rx_queue_49_packets: 0
         rx_queue_49_bytes: 0
         rx_queue_50_packets: 0
         rx_queue_50_bytes: 0
         rx_queue_51_packets: 0
         rx_queue_51_bytes: 0
         rx_queue_52_packets: 0
         rx_queue_52_bytes: 0
         rx_queue_53_packets: 0
         rx_queue_53_bytes: 0
         rx_queue_54_packets: 0
         rx_queue_54_bytes: 0
         rx_queue_55_packets: 0
         rx_queue_55_bytes: 0
         rx_queue_56_packets: 0
         rx_queue_56_bytes: 0
         rx_queue_57_packets: 0
         rx_queue_57_bytes: 0
         rx_queue_58_packets: 0
         rx_queue_58_bytes: 0
         rx_queue_59_packets: 0
         rx_queue_59_bytes: 0
         rx_queue_60_packets: 0
         rx_queue_60_bytes: 0
         rx_queue_61_packets: 0
         rx_queue_61_bytes: 0
         rx_queue_62_packets: 0
         rx_queue_62_bytes: 0
         rx_queue_63_packets: 0
         rx_queue_63_bytes: 0
         tx_pb_0_pxon: 0
         tx_pb_0_pxoff: 0
         tx_pb_1_pxon: 0
         tx_pb_1_pxoff: 0
         tx_pb_2_pxon: 0
         tx_pb_2_pxoff: 0
         tx_pb_3_pxon: 0
         tx_pb_3_pxoff: 0
         tx_pb_4_pxon: 0
         tx_pb_4_pxoff: 0
         tx_pb_5_pxon: 0
         tx_pb_5_pxoff: 0
         tx_pb_6_pxon: 0
         tx_pb_6_pxoff: 0
         tx_pb_7_pxon: 0
         tx_pb_7_pxoff: 0
         rx_pb_0_pxon: 0
         rx_pb_0_pxoff: 0
         rx_pb_1_pxon: 0
         rx_pb_1_pxoff: 0
         rx_pb_2_pxon: 0
         rx_pb_2_pxoff: 0
         rx_pb_3_pxon: 0
         rx_pb_3_pxoff: 0
         rx_pb_4_pxon: 0
         rx_pb_4_pxoff: 0
         rx_pb_5_pxon: 0
         rx_pb_5_pxoff: 0
         rx_pb_6_pxon: 0
         rx_pb_6_pxoff: 0
         rx_pb_7_pxon: 0
         rx_pb_7_pxoff: 0
    root@node03:~#
    root@node03:~# ethtool -i enp4s0f1
    driver: ixgbe
    version: 5.1.0-k
    firmware-version: 0x30030001
    expansion-rom-version:
    bus-info: 0000:04:00.1
    supports-statistics: yes
    supports-test: yes
    supports-eeprom-access: yes
    supports-register-dump: yes
    supports-priv-flags: yes
    root@node03:~# ethtool --phy-stats enp4s0f1
    ethtool: bad command line argument(s)
    For more information run ethtool -h
    root@node03:~# ethtool -h
    ethtool version 4.8
    Usage:
    
     
  15. alexskysilk

    alexskysilk Active Member

    Joined:
    Oct 16, 2015
    Messages:
    556
    Likes Received:
    58
Just out of curiosity, have you ever used these NICs with the cables/switch that you have them connected to now? I'd go back and double check the known working config.
     
  16. tuantv

    tuantv New Member

    Joined:
    Jan 5, 2019
    Messages:
    15
    Likes Received:
    0
Everything, including the link lights, the cables, and the physical connections, works normally on CentOS 7.x; it is only on Proxmox that the link doesn't come up. Please give me advice and direction.
     
  17. elurex

    elurex Member
    Proxmox Subscriber

    Joined:
    Oct 28, 2015
    Messages:
    149
    Likes Received:
    4
Do you use a DAC or a transceiver?

If you are using a transceiver,
you may try the allow_unsupported_sfp=1 option for the ixgbe driver.
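
A sketch of how that could be set persistently (assuming the in-tree ixgbe module; verify with `modinfo ixgbe` that the parameter exists in your driver build):

Code:
echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf
update-initramfs -u                 # so the option also applies at early boot
rmmod ixgbe && modprobe ixgbe       # or simply reboot
dmesg | grep -iE 'sfp|ixgbe' | tail -n 20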
     
    #17 elurex, Jan 14, 2019
    Last edited: Jan 14, 2019
  18. tuantv

    tuantv New Member

    Joined:
    Jan 5, 2019
    Messages:
    15
    Likes Received:
    0
@elurex When I add the allow_unsupported_sfp=1 option, the 10Gb card is no longer displayed. Do you have a better solution? Please advise.
     
  19. czechsys

    czechsys Member

    Joined:
    Nov 18, 2015
    Messages:
    139
    Likes Received:
    3
1] Try a different kernel (even a standard Debian kernel, not just the PVE one) - a sketch of that is below
2] Check the firmware
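
For 1], a rough sketch on a PVE 5.x / Debian Stretch base (the stock Debian meta-package is assumed here; note that the Debian kernel does not ship the ZFS modules included with the PVE kernel):

Code:
apt update
apt install linux-image-amd64       # stock Debian kernel alongside the pve kernel
reboot                              # pick the Debian kernel under GRUB "Advanced options"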
     
  20. elurex

    elurex Member
    Proxmox Subscriber

    Joined:
    Oct 28, 2015
    Messages:
    149
    Likes Received:
    4
    @tuantv

Here is the dmesg output for my ixgbe device:

    Code:
    [    2.046280] ixgbe 0000:05:00.0 enp5s0f0: renamed from eth1
    [    7.369699] ixgbe 0000:05:00.0 enp5s0f0: changing MTU from 1500 to 9000
    [    7.543537] ixgbe 0000:05:00.0: registered PHC device on enp5s0f0
    [    7.648574] IPv6: ADDRCONF(NETDEV_UP): enp5s0f0: link is not ready
    [    7.716031] ixgbe 0000:05:00.0 enp5s0f0: detected SFP+: 3
    [    7.964053] ixgbe 0000:05:00.0 enp5s0f0: NIC Link is Up 10 Gbps, Flow Control: RX/TX
    [    7.964169] IPv6: ADDRCONF(NETDEV_CHANGE): enp5s0f0: link becomes ready
I looked at your log and your SFP+ was never detected; that is the direction you need to look in.
     