mellanox

  1. 10 Gigabit speed drops down to 3 Gbit. Why?

    Hello to all network and Proxmox experts, here is what happened: I treated my Proxmox home server to a power-saving upgrade. After I had reassembled it, I reconnected the Mellanox ConnectX-2 single-port card (server) to my main system running Debian 11 Bullseye over an SFP+...
  2. Unable to create Trunk port [Please Help :)]

    Hello, Really grateful for some assistance, pulling my hair out here... Setup: dell r730 with quad port broadcom nic (OEM) and a mellanox connectx-3, proxmox 7.2 VM: sophos xg with 3 vmbr (2 wan, 1 lan (trunk)). Issue: Trunk port works fine when I use one of my Dell r730 integrated quad port...
  3. Old Mellanox cards

    Ok, so I have these very old, and I mean OLD, Mellanox cards. They don't use SFP or SFP+; they use a CX4 connector. Not the Mellanox CX4 cards, but the port... Anyway, it was a cheap way for me to get 10gb in my house on my gaming rig (Win10), plex server (Win10), Storage Server (Windows...
  4. Networking problems with Mellanox Cards with newest Kernel 5.13

    Proxmox cluster with Mellanox ConnectX-4 Lx network cards: worked under kernel 5.11, everything fine so far. After upgrading to the newest kernel 5.13, massive networking problems: impossible to dump SFP info with ethtool, got bit errors, massive problems with the local Ceph instance installed, and so...
  5. Proxmox 7 and Mellanox ConnectX4 and vlan aware bridge

    Hello, I'm setting up a new cluster using Mellanox ConnectX5 and Connect4LX cards. The ConnectX5 have not given me any issue (yet?), but the 4LX do not work that flawlessly. After solving a very slow boot issue related to some old firmware version, now it seems that I won't be able to use...
  6. Can't passthrough Mellanox ConnectX-3 to VM

    Hi everyone. First of all, English is not my native language, but I will do my best to describe what I am trying to do, so if anything is unclear, please let me know and I will try my best to explain. I have a server with PVE installed; it recognizes the ConnectX-3 and it works fine, but when I try to pass it through to my TrueNAS VM, I get the following error: kvm: -device vfio-pci,host=0000:01:00.0,id=hostpci0,bus=pci.0,addr=0x10: vfio 0000:01:00.0: Failed to set up TRIGGER eventfd signaling for interrupt INTX-0: VFIO_DEVICE_SET_IRQS failure: Device or resource busy...
  7. [SOLVED] pve 6.3 vs. mellanox ofed

    Hello guys, I would like to ask one thing: we would need SR-IOV enabled on our mellanox card but pve 6.3 (debian 10.6) is not listed among supported OSes here https://www.mellanox.com/support/mlnx-ofed-matrix and pve comes only in 6.3 version AFAICS http://download.proxmox.com/iso/ ... any idea...
  8. CEPH Performance issues after upgrade from 15.2.8 to 15.2.10

    Hello, some information about the system: it's a hyperconverged cluster of 5 Supermicro AS-1114S-WN10RT. 4 of the servers have: CPU: 128 x AMD EPYC 7702P 64-Core Processor (1 Socket), RAM: 512 GB. 1 of the servers has: 64 x AMD EPYC 7502P 32-Core Processor (1 Socket), RAM: 256 GB. Network: All...
  9. [SOLVED] PVE 6.3 + OVS / Linux bridge + MLX312B SR-IOV + LACP

    Hey folks, I'm fighting with network setup, not sure if this config can work, so would be nice if you can share info or your config regarding similar setup. PVE: 6.3-3 System: HPE ML350 Gen9 Network card: HPE 546SFP+ (MLX312B) - 2-port SFP+ (part number: 779793-B21) Bridging: using OVS...
  10. [SOLVED] Mellanox ConnectX-3 not detected

    Hi, I'm trying to get a Mellanox ConnectX-3 (MCX311A-XCAT, CX311A) card to work with Proxmox. When using lspci, the card doesn't show up at all. I've searched everywhere but can't find where to go from here.
  11. Help setting iser transport for ZFS on ISCSI

    Hi all, I'm new to Proxmox, having been running VMware since maybe 3.5 or something. I decided to switch over because I wanted to take the cheaper path to InfiniBand and then found the support in VMware not quite there. My setup now is a Proxmox server with a dual-port Mellanox ConnectX-3 card. My...
  12. Benchmark: 3 node AMD EPYC 7742 64-Core, 512G RAM, 3x3 6,4TB Micron 9300 MAX NVMe

    Hi everybody, we are currently in the process of replacing our VMware ESXi NFS Netapp setup with a Proxmox Ceph configuration. We purchased 8 nodes with the following configuration: - ThomasKrenn 1HE AMD Single-CPU RA1112 - AMD EPYC 7742 (2,25 GHz, 64-Core, 256 MB) - 512 GB RAM - 2x 240GB SATA...
  13. Installation fails to start. No network adapters found. Mellanox connectx-2

    Hi, I was planning on switching from just using Ubuntu Server 18.04 to Proxmox 6.1 and virtualizing my existing Ubuntu server. I've run into an issue, however. When trying to install Proxmox, I get the error "No network adapters found". The board I am using is an old z97 board with a core i5...
  14. Proxmox + Mellanox 3X 40G Poor bandwidth performance

    Hello, I'm testing the performance over two nodes connected by two Mellanox cards: MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0). On the latest Proxmox 6 I have installed all packages: apt-get install rdma-core libibverbs1 librdmacm1 libibmad5 libibumad3 librdmacm1 ibverbs-providers...
  15. Mellanox - iSER/Ceph

    I'm having issues getting iSER over iWARP to work properly with Proxmox (or any other OS) and Intel has been very unhelpful about getting me to a working state. I do need the added bandwidth/latency offered by RDMA; so I'm looking into alternatives. I'm making this post to get any input from...
  16. Yet another question on Mellanox ConnectX-3

    I am testing the latest Proxmox in our environment and discovered that I need to switch the Mellanox ConnectX-3 into Ethernet mode for it to work. When trying to install the mft tools (from the "Getting started with Mellanox Firmware tools (MFT) for Linux" tutorial over on the Mellanox website), I get a...
  17. [SOLVED] Infiniband Mellanox 40Gb Speed Low

    Hi, first of all, merry Christmas, guys! I just installed 2 Mellanox InfiniBand cards (40Gb ConnectX-2), one in each server, and after that installed the latest Mellanox drivers from here: http://www.mellanox.com/page/products_dyn?product_family=27 Finally I ran a test with iperf, but I only get this...
  18. [SOLVED] Is OVS offloading supported ?

    Hello, is Mellanox ASAP² (with ConnectX-4 Lx / ConnectX-5) or the Netronome counterpart (with Agilio CX) supported for offloading Open vSwitch with Proxmox? Thanks!
  19. virtio disk driver windows2016 rate limited ?

    Hi Folks, I just migrated Ceph entirely to BlueStore. A test with a Windows 2016 server has good results, but I think the limiting component is the virtio driver! See also https://forum.proxmox.com/threads/virtio-ethernet-driver-speed-10gbite.35881/ concerning Ethernet speed... I see no "tunables" to...
  20. ceph performance 4node all NVMe 56GBit Ethernet

    Hi Folks, what performance should I expect from this cluster? Are my settings OK? 4 nodes: system: Supermicro 2028U-TN24R4T+, 2-port Mellanox ConnectX-3 Pro 56Gbit, 4-port Intel 10GigE; memory: 768 GBytes; CPU: dual Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz; Ceph: 28 OSDs, 24 Intel NVMe 2000GB...
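Several of the threads above (the undetected ConnectX-3, the passthrough failure, the Ethernet-mode switch) start from the same first diagnostic steps: confirm the card is enumerated on the PCIe bus, check which kernel driver is bound, and, on ConnectX-3 class cards, query the configured port protocol. A minimal sketch of those checks, assuming the `pciutils`, `ethtool`, and `mstflint` packages are installed; the interface name `ens1f0` and PCI address `01:00.0` are placeholders for your own system:

```shell
# Is the Mellanox card visible on the PCIe bus at all?
lspci -nn | grep -i mellanox

# Which kernel driver is bound, and what firmware does it report?
# (ens1f0 is a placeholder interface name)
ethtool -i ens1f0

# ConnectX-3 and newer: query the configured port protocol (InfiniBand vs. Ethernet).
# mstconfig ships with the mstflint package; 01:00.0 is a placeholder PCI address.
mstconfig -d 01:00.0 query | grep -i link_type
```

If `lspci` shows nothing, the problem is below the OS (slot, BIOS, or card); if the card is listed but `ethtool -i` fails, it is usually a driver or port-mode issue.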

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!

Get your subscription!

The Proxmox team works very hard to make sure you are running the best software and getting stable updates and security enhancements, as well as quick enterprise support. Tens of thousands of happy customers have a Proxmox subscription. Get your own in 60 seconds.

Buy now!