We have been grappling for months with moving our hyperconverged Ceph cluster from 10Gb copper to 25Gb optical fiber, and so far we have been entirely unsuccessful. Is there a hardware compatibility list we can check to find out whether the 25Gb network cards we are trying to use are compatible with Proxmox 6.x?
Is it possible the cards we are using will be compatible on Proxmox 7.x with its newer kernel?
The cards we are using for 25Gb are as follows:
Nvidia chipset based - Mellanox ConnectX-4 25Gb MCX-4121A-ACAT
These have not worked for us at all so far, and we have found a bug in the Proxmox networking stack with them: with ifupdown2, IP addresses assigned to the new 25Gb interfaces are not removed from the previous 10Gb interfaces, so the same addresses end up on both the 10Gb and 25Gb NICs after clicking "Apply Configuration" in the Proxmox GUI.
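In case it helps anyone reproduce or work around this, here is roughly what we do by hand after applying the configuration. The interface name (eno1) and address (192.0.2.10/24) below are placeholders, not our actual values:

```
# Show brief address state on all interfaces to spot the duplicate
ip -br addr show

# If the address is still present on the old 10Gb NIC (placeholder: eno1),
# remove it manually (placeholder address shown)
ip addr del 192.0.2.10/24 dev eno1
```

After deleting the stale address from the 10Gb interface, traffic flows over the 25Gb link as expected until the next configuration apply.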
We also purchased Intel 25Gb fiber NICs to test, and those seem to work, but we would like to know whether they are officially supported on Proxmox 6.x, 7.x, or both.
The Intel NICs are as follows:
Intel XXV710 Dual Port 25GbE SFP28/SFP+ PCIe Adapter
We were able to install these in 2 nodes and copy a VM backup between them over 25Gb as a test. This was over SCP, so it wasn't particularly fast, but it nevertheless worked.
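For what it's worth, SCP is CPU-bound on encryption and makes a poor throughput test; next we plan to check the raw link speed with iperf3 (available from the Debian repositories). The hostname below is a placeholder for the node's 25Gb address:

```
# On the first node, start the server
iperf3 -s

# On the second node, run a 30-second test with 4 parallel streams
# (node1-25g is a placeholder hostname)
iperf3 -c node1-25g -P 4 -t 30
```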
Can someone from Proxmox please confirm that one of these models is indeed well supported? And is there anything extra we need to do to make the Mellanox cards work properly, or can we use Mellanox with a different, supported model instead of the one mentioned above?
Thanks in advance,
CTC