Drivers

adamb

Mar 1, 2012
I have pinned down that the Broadcom 10Gb drivers (bnx2x) that ship by default with Proxmox are no good. The current version in Proxmox is 1.72.00, and with these drivers you get kernel panics on heavy transfers or DRBD sync. I have found 1.72.18 to be perfect. What needs to be done to ensure the latest drivers make it into the next kernel? I appreciate the help! It seems to only affect cards using bnx2x, which are the 10Gb versions; I believe the 1Gb cards are OK.
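For anyone checking which version a node is actually running, a quick way to compare the loaded module against the module file on disk (eth0 is just an example interface; ethtool may need installing first, as noted further down):

ethtool -i eth0
modinfo bnx2x | grep -i version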
 
How did you install the new driver version?

I moved them into place.

/lib/modules/2.6.32-17-pve/kernel/drivers/net/bnx2x/

Then updated my initrd image.

cd /boot/
mv initrd.img-2.6.32-17-pve initrd.img-2.6.32-17-pve.bak
update-initramfs -c -k 2.6.32-17-pve
update-grub
shutdown -r now

Check for the new drivers with
ethtool -i eth0

You will need to install ethtool if you haven't already:
apt-get install ethtool
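If in doubt whether the rebuilt initramfs actually picked up the new module, one way to check is to list its contents (assuming initramfs-tools' lsinitramfs is available):

lsinitramfs /boot/initrd.img-2.6.32-17-pve | grep bnx2x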
 
Are you using a .ko file then? How did you get the new driver file? I'm struggling to update the drivers for my Broadcom NIC; I'm currently using 2.2.1 and the newest is 2.2.3e.
 
I have the source, but when I try to use 'make' it doesn't work... I've never done this before, so I'm grateful for your help!
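For what it's worth, 'make' on an out-of-tree driver source usually fails when the headers for the running kernel aren't installed. A rough sketch of the usual steps on a 2.6.32-17-pve node - the headers package follows the normal Proxmox naming, but the tarball name and directory layout are placeholders, so check the README that ships with Broadcom's source:

apt-get install pve-headers-2.6.32-17-pve
cd netxtreme2-x.y.z/bnx2x/src
make
make install

Then rebuild the initramfs and reboot as described above, and confirm the new version with ethtool -i.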
 
No, I'm having issues with my NICs not running at the speeds they should with my SAN.

When compared with Windows using MPIO I'm getting 212-220MB/s in benchmarks, whereas with Proxmox I'm only getting 108MB/s even though all 4 NICs are in use, which suggests there is an issue on the Proxmox node. I wondered if it was driver related - hence wanting to update the drivers, as there are a lot of iSCSI fixes listed in the release notes.
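One quick sanity check at this point, assuming dm-multipath is managing the paths: make sure all four paths to the LUN show up as active in a single path group rather than sitting in a failover group, otherwise only one NIC carries I/O at a time.

multipath -ll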
 
I take it you are using a bonded config? What mode are you running?
 
The multipathing is working fine - I have an SNMP monitor on the switch - and each NIC only hits a max of 250kbits/s Node<>SAN, whereas a Windows 7 box with the same NICs gets over 500 doing the same test.

I have also tested all 3 nodes, all with Broadcom 5709 quad cards, and between each node I get 950 when using iperf.
 
Are you using balance-rr?
 
No, I have no bonding set up since I'm using multipathing, but it does use round-robin for balancing the NIC traffic.
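For reference, "all paths used in round-robin" in dm-multipath generally comes down to settings like these in /etc/multipath.conf (values are illustrative, not taken from this thread; rr_min_io in particular is worth tuning when throughput is the problem):

defaults {
    path_grouping_policy multibus          # all paths in one group, used simultaneously
    path_selector "round-robin 0"          # spread I/O across the paths in the group
    rr_min_io 100                          # I/Os sent down a path before switching
}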
 
I think you're confusing units.

The "950" you're seeing is 950Mbit/s, which is about 118MB/s (divide by 8 to go from bits to bytes).

If you can't break the 1Gbit/s barrier in iperf, don't expect to see your iSCSI bandwidth break 1Gbit/s either.
 
It's not really helpful to you if each of your four NICs will do a gigabit by themselves... you need the aggregate speed between the servers to be greater than 1Gbit/s.

For example, in my environment (two bonded GigE links), I get:

[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 2.30 GBytes 1.97 Gbits/sec
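For anyone wanting to reproduce that kind of figure, it's presumably just iperf (v2, going by the output format) run between the two hosts; the address below is a placeholder, and -P runs parallel streams, which helps when a single TCP stream can't fill an aggregated link:

iperf -s                              # on the receiving host
iperf -c 192.168.1.10 -t 10 -P 4      # on the sending host; 192.168.1.10 is a placeholder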
 
Yeah, but you said 250kbits, which would not come anywhere close to 108MB/s. You mean each NIC is running at 250Mbit/s, which across four NICs is pretty much equal to a single 1G line.

That's the problem - individually the NICs will all transfer at 950 using iperf. But when doing IO tests on my SAN, the maximum throughput I see when monitoring the ports is 250. And when I use Windows 7 and the same NICs to connect to the SAN, I get 550. My question is why the difference - it's the same settings apart from Linux vs Windows.

I can't run bonded connections as my SAN has Active/Active controllers, so I have to use multipath to allow for the 4 backup connections.
 
My best guess is a tuning/config issue, or the drivers. I am not familiar with multipath, so I can't be of too much help.
 
