Intel 82574L NIC not working.

dakota_winds

New Member
Jul 25, 2012
Hi,

I have two identical systems, Asus P6X58-E WS boards, each with two Intel 82574L NICs that refuse to function properly. I have just installed and upgraded to the latest Proxmox VE 2.1-12, running kernel 2.6.32-12-pve.
I have to run /etc/init.d/networking stop and then /etc/init.d/networking start to get them to function, and even that does not always work. If I then adjust the settings in /etc/network/interfaces and change eth0 to eth1, it works until reboot. After a reboot it is back to the same behavior: NICs not working. I have tested each NIC with an Ubuntu 10.04 Live CD and they function correctly there. What must I do to get these Intel NICs to function properly?

Thank you,
dakota_winds
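
A quick way to check which kernel name (eth0/eth1) ended up on which physical port is ethtool's identify blink; this assumes the e1000e driver is in use for the 82574L, and the interface names are just examples:
Code:
ethtool -i eth0        # driver line should report e1000e for the 82574L
ethtool -p eth0 5      # blink the LED of the port currently named eth0 for 5 seconds
dmesg | grep -i eth    # shows which MAC address was assigned to which ethX at boot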
 
Running kernel 2.6.32-12-pve.

The latest is 2.6.32-13-pve.
 
...I don't know if this might be related, but there is/was an issue with the Advanced Power Management feature of PCI(e) devices in some chipsets and Intel NICs (incl. the 82574L).
Try to disable this feature (ASPM or similar) in the BIOS of your mobo.
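
If the BIOS exposes no ASPM option, a commonly mentioned fallback is to turn ASPM off at the kernel level with the pcie_aspm=off boot parameter. On a stock Proxmox VE 2.x install (Debian Squeeze with GRUB 2) that would look roughly like this; a sketch, not verified on this particular board:
Code:
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet pcie_aspm=off"

# regenerate the GRUB config and reboot
update-grub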
 
Thanks for the suggestions, but I found no ASPM setting in the BIOS. I did, however, disable ACPI, and that did not help either. The curious thing is that I can change the configuration in the /etc/network/interfaces file and it works after I run /etc/init.d/networking stop/start. Once I reboot, it's back to not working.
 
Hi,
what do you change in /etc/network/interfaces so that it works after that?

Do you also change network settings from the GUI? In that case those settings overwrite the interfaces file during boot.

Have you taken a look at the device ordering in /etc/udev/rules.d/70-persistent-net.rules?

Udo
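
For context: that file is generated by udev and maps MAC addresses to interface names, so swapping the NAME= values (or the whole lines) and rebooting changes which physical NIC becomes eth0. A sketch with made-up MAC addresses:
Code:
# /etc/udev/rules.d/70-persistent-net.rules (MAC addresses here are examples)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:55", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:22:33:44:66", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"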
 
I just comment out the vmbr0 settings, add a stanza for eth0 or eth1, put the patch cable in the appropriate port, and run /etc/init.d/networking stop/start. All done via the local console.
I have not checked the device ordering in said location.
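
To illustrate the kind of edit described above: a stock Proxmox VE install bridges the first NIC into vmbr0, and the temporary workaround amounts to changing which ethX is used (or configuring the other port directly). A sketch with placeholder addresses:
Code:
# /etc/network/interfaces (addresses are placeholders)
auto lo
iface lo inet loopback

# default Proxmox setup: eth0 enslaved to the vmbr0 bridge
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

# temporary workaround from this thread: comment out vmbr0 above and
# bring up the other port directly instead
#auto eth1
#iface eth1 inet static
#        address 192.168.1.10
#        netmask 255.255.255.0
#        gateway 192.168.1.1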
 
I changed the order in /etc/udev/rules.d/70-persistent-net.rules and now it's working! Thanks for the suggestion. Now I will reinstall with the correct NIC used as my primary.
At least I hope this will work. I will be testing DRBD; we would like to move all our systems over to VMs. Thanks again, udo!!
 
Hi,
a reinstallation is normally not necessary, because the NIC ordering is not selectable during installation anyway. The usual approach is to change the udev rule or to use the right device in the interfaces file.

Udo
 
Hi Udo,

I did have to reinstall on both systems. One actually worked when I changed the order in /etc/udev/rules.d/70-persistent-net.rules and one system did not.

Thanks,
dakota_winds
 
Hi,
one worked and one did not, with identical systems?? Then something must be wrong...

Please post the output of the following commands from both nodes (working and non-working):
Code:
dmesg | grep eth
brctl show
cat /etc/network/interfaces
ifconfig -a
Udo
 
Hi Udo,

I reinstalled on both systems already and all is well. I am currently in the process of syncing the DRBD systems. DRBD is taking an extremely long time to sync, which is probably to be expected. These are test machines; I intend to use SuperMicro systems for production.

Thanks,
dakota_winds
 
Hi,
for DRBD a fast network is very useful (10 Gb Ethernet or InfiniBand...), but with the right configuration you can also live with a 1 Gb connection (though not very fast, of course).

Hmm, SuperMicro? Then I wish you good luck (I have some SuperMicro boards). IMHO it's a strange company: for example, they write on their homepage "only update the BIOS if you are sure you have a BIOS-related issue", but the BIOS downloads state neither which issues are fixed nor what date the BIOS has... that has nothing to do with production-safe systems!

Udo
 
Hi Udo,

I am using a direct connection between the NICs on each system, no switch. The problem is that it's taking a very long time. They are Gigabit-capable NICs, and yet after 24 hours it is only 11% done, with only a 1 TB drive. Can I stop it and try another cable?

Thanks,
dakota_winds
 
Hi,
I guess it's not a cable fault. Simply stop DRBD (e.g. "drbdadm down r0", and r1 if you have two resources) and check the network performance with iperf in both directions, like this:
Code:
# on both nodes "apt-get install iperf"
# on node a
iperf -s

# on node b
iperf -c IP.OF.NODE.A

#and vice versa
You wrote it's a 1 TB drive - I guess a single drive. With two partitions for two resources, or only one partition? A fast I/O system (RAID controller with fast disks) is always a good idea ;-)

For better performance you can also disable encryption if you use a crossover cable.

Udo
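
For reference, both points (sync bandwidth and dropping the CPU-heavy hashing that is referred to here as "encryption") live in the DRBD 8.3-style resource configuration that PVE 2.x installs typically use. This is only a sketch: resource name, host names, IPs and disks are examples, and which net options your setup actually has enabled may differ.
Code:
# /etc/drbd.d/r0.res  (DRBD 8.3 syntax; names, IPs and disks are examples)
resource r0 {
        protocol C;
        net {
                # peer authentication / per-packet checksumming - often
                # omitted on a dedicated crossover link to save CPU
                # cram-hmac-alg sha1;
                # shared-secret "my-secret";
                # data-integrity-alg sha1;
                allow-two-primaries;
        }
        syncer {
                rate 100M;   # resync bandwidth cap, roughly what a 1 Gb link can carry
        }
        on nodeA {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 10.0.0.1:7788;
                meta-disk internal;
        }
        on nodeB {
                device /dev/drbd0;
                disk /dev/sdb1;
                address 10.0.0.2:7788;
                meta-disk internal;
        }
}
# apply changed settings on both nodes:
# drbdadm adjust r0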
 
I should have been more specific. Both systems are identical. The drives for DRBD are 1 TB drives: slow 7200 RPM SATA, just for testing purposes (no RAID controllers connected). I see that the encryption is active, and that may be a significant part of the slowness issue. I will be leaving it as is for now, since I have other systems that need attention. These can run over the weekend and I hope the sync finishes. Thanks for the info on network performance testing; I will try that.

Thanks,
dakota_winds
 
Hi Udo,

After some time and frustration, DRBD is working. I wanted to migrate the VM I created to the other node, but when I choose the VM and click Migrate, it only shows the node it's already on and gives the error "target: target is local node". When attempting to choose the other node, it's not available. Thanks for your time.

dakota_winds
 
Hi,
you must mark your LVM-on-DRBD storage as shared (and restrict it to only the 2 nodes, if you have more than two).

Udo
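
For illustration: this is set under Datacenter -> Storage in the web GUI (edit the LVM storage, tick "Shared" and restrict the node list), which results in an /etc/pve/storage.cfg entry roughly like the following. Storage, VG and node names are hypothetical, and the exact key formatting can vary between PVE versions:
Code:
# /etc/pve/storage.cfg (example entry, names are placeholders)
lvm: drbd-lvm
        vgname drbdvg
        content images
        shared 1
        nodes nodeA,nodeB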
 
