Nehalem Xeon E5520 and Intel S5520UR

nick

Renowned Member
Mar 28, 2007
Hi All,

Does anyone work with the Nehalem Xeon E5520 and Intel S5520UR? I intend to buy a server with these components... and I want to know if everything is OK.

Does PVE support all Nehalem features? HT?
 
I am running the latest Proxmox on an Intel SR2600UR with two Xeon E5520s. It shows up as 16 cores in Proxmox, so HT should be supported correctly.
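
(If you want to double-check on the host, here is a minimal sketch, assuming the standard /proc/cpuinfo layout, that compares logical siblings with physical cores per package; on a dual E5520 with 4 cores / 8 threads per socket it should report siblings=8, cores=4.)
Code:
#!/usr/bin/env python
# Rough HT check: compare "siblings" (logical CPUs per package) with
# "cpu cores" (physical cores per package) from /proc/cpuinfo.
siblings = cores = None
for line in open("/proc/cpuinfo"):
    if line.startswith("siblings"):
        siblings = int(line.split(":")[1])
    elif line.startswith("cpu cores"):
        cores = int(line.split(":")[1])
if siblings and cores:
    state = "on" if siblings > cores else "off"
    print("siblings=%d cores=%d -> HT %s" % (siblings, cores, state))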

Proxmox installs without any trouble. However, I have had some trouble with KVM machines hanging or going into pause randomly (see another thread on this forum). It seemed related to multi-core KVMs; single-core KVMs have been running fine. Unfortunately, I have not had time to dig deeper into whether there is a real problem or not. At the moment we mostly use OpenVZ machines, which run very stably.
 

I have 2x Xeon E5520s as well and see the same issues. I've got one CentOS KVM that performs "oddly" at times and one that just hangs once every 2 days or so. I cannot track down anything substantial yet; once they reboot they are fine for a few days. Both of mine are single-core with 4GB of RAM each.
 
I have some old servers with the Xeon X3220 and I run KVM (CentOS 5) with 2 CPUs with no problem.

I don't know what will happen with the new Xeon E5520; I'm a little bit worried about what will happen on the new server...
 
Apparently it is not limited to Nehalem-based machines:
http://sourceforge.net/tracker/?func=detail&atid=893831&aid=2351676&group_id=180599

Someone writes that kernel 2.6.31-rc6 solves the problem. Are the 2.6.31 KVM patches backported to Proxmox 1.4? If someone is not running production on the server yet, it might be possible to try out a newer kernel without OpenVZ support.
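
(After rebooting into a test kernel, here is a minimal sketch, assuming a standard "X.Y.Z[-suffix]" release string, to check whether the running kernel is at least 2.6.31:)
Code:
#!/usr/bin/env python
# Compare the running kernel version against 2.6.31, the version
# reported to fix the hang. Release suffixes like "-rc6" are ignored.
import platform

base = platform.release().split("-")[0]
version = tuple(int(p) for p in base.split("."))
print("running %s, >= 2.6.31: %s" % (platform.release(), version >= (2, 6, 31)))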


See also:
KVM machines hanging
http://www.proxmox.com/forum/showthread.php?p=14066#post14066

Ver 1.4 and Windows 2008 Std Edition + SMP
http://www.proxmox.com/forum/showthread.php?t=2581
 
Is there a safe way to downgrade again if it does not work correctly? The server is used for production now, but we only depend on OpenVZ machines.

You can choose the kernel at the boot prompt (GRUB), so it should be safe to test the new kernel.
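
(For example, a small sketch, assuming the usual Debian /boot layout with vmlinuz-* files, that lists the installed kernels and marks the running one, so you can confirm which entry you picked at the GRUB prompt:)
Code:
#!/usr/bin/env python
# List kernels installed in /boot and mark the one currently running.
import os
import platform

running = platform.release()
prefix = "vmlinuz-"
for f in sorted(os.listdir("/boot")):
    if f.startswith(prefix):
        k = f[len(prefix):]
        print("%s%s" % (k, "  <- running" if k == running else ""))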
 
I loaded the new kernel and it seems OK so far. I'll try to put some stress on it tomorrow.

I noticed the following when starting OpenVZ machines:
Code:
Nov 17 18:16:03 jango kernel: ioctl32(mount:5639): Unknown cmd fd(3) cmd(80041272){t:12;sz:4} arg(bfee8798) on /
Nov 17 18:16:03 jango kernel: ioctl32(mount:5639): Unknown cmd fd(3) cmd(00001260){t:12;sz:0} arg(bfee87a0) on /
Nov 17 18:16:03 jango kernel: ioctl32(mount:5639): Unknown cmd fd(3) cmd(801c0204){t:02;sz:28} arg(bfee877c) on /
Nov 17 18:16:03 jango kernel: ioctl32(mount:5639): Unknown cmd fd(3) cmd(80041272){t:12;sz:4} arg(bfee8798) on /
Nov 17 18:16:03 jango kernel: ioctl32(mount:5639): Unknown cmd fd(3) cmd(00001260){t:12;sz:0} arg(bfee87a0) on /
Nov 17 18:16:03 jango kernel: ioctl32(mount:5639): Unknown cmd fd(3) cmd(801c0204){t:02;sz:28} arg(bfee877c) on /

But it also showed up in the previous 2.6.24-8 kernel. I'll post the results of stress testing as soon as possible.
 

You can ignore this; it's a well-known but harmless bug.
 
Hi Tom,

And what about my problem with live migration on AMD with 2 CPUs in the guest? I tested this new kernel and the problem is still the same.

This thread is about Nehalem (see subject); we should discuss the AMD issue in a new one to keep the forum clean (but I have no quick solution right now).
 
First test results:
VMs:
2x Ubuntu 8.04 server i386
2GB RAM, IDE, raw-file, rtl8139, 1 socket, 4 cores

So far so good: 10 hours of intensive compiling on both machines without any trouble. I'm continuing the tests for the rest of the week, and I'll raise the load by adding two extra machines when we are not using the server during the weekend.
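
(For anyone who wants to generate a comparable load without setting up compile jobs, a minimal CPU-burn sketch, standard library only, assuming Python 2.6+ for multiprocessing; the one-hour duration is just an example:)
Code:
#!/usr/bin/env python
# Keep every core of the guest busy for a fixed time, as a crude
# stand-in for the compile jobs used in the test above.
import multiprocessing
import time

def burn(seconds):
    # Tight integer loop to pin one core at 100%.
    end = time.time() + seconds
    x = 1
    while time.time() < end:
        x = (x * 1103515245 + 12345) % 2147483648

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=burn, args=(3600,))
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()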
 
I got a dual Intel(R) Xeon(R) E5506 @ 2.13GHz server; CentOS hangs with "CPU stuck" 2-4 times per hour.
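
(To get a feel for how often it happens, here is a small sketch that counts soft-lockup style messages in the guest's syslog; the log path assumes a default CentOS setup:)
Code:
#!/usr/bin/env python
# Count "BUG: soft lockup - CPU#N stuck ..." style messages in
# /var/log/messages and print the most recent ones.
import re

pattern = re.compile(r"soft lockup|CPU#?\d+ stuck", re.IGNORECASE)
hits = [line.rstrip() for line in open("/var/log/messages")
        if pattern.search(line)]
print("%d matching lines" % len(hits))
for line in hits[-5:]:
    print(line)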
 


Promising!!! I have a cluster with 5540s too, and had the same trouble 3-4 months ago (PVE 1.3). I just disabled HT... If all is OK with your tests I will turn it on again (it's a production environment, I cannot test there).