problems with: new pve-2.6.24.11, intel CPUs, and CentOS 5.4 i686 KVM guests

dweeb

New Member
May 28, 2010
Ok, I'm new to PVE.

I have a VMware shop and wanted to move 'everything' to a pve-cluster.

I built and tested pve-2.6.24.10 (v1.5) on an AMD PII Quad-Core desktop computer.

All guests seemed to work fine... so I built a 6-node pve-cluster using my Intel Xeon servers.

All of my Windows 2003 Server KVM guests worked fine; but NOT my CentOS i686 guests.
My CentOS guests get 'stuck CPU' errors, memory leaks, and complete lock-ups/freezes.
I built a Mandriva MES 5.1 KVM guest on my AMD box and then tried to run it on my Intel-based pve-cluster - but only my Xeon 5205 CPUs could boot it.
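When a guest boots on one node but not another, a useful first step is to compare what CPU features each host actually exposes. A minimal sketch using standard Linux tools (nothing Proxmox-specific; the /tmp path is just an example):

```shell
# Dump this node's CPU model and feature flags, one flag per line,
# so the output can be diffed against another node's.
grep -m1 '^model name' /proc/cpuinfo
grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2 | tr ' ' '\n' | sort -u > /tmp/cpuflags.txt
```

Running this on two nodes and diffing the two files shows which features (vmx, sse4, etc.) one Xeon has that another lacks - often the reason a guest built or booted on one CPU refuses to run on another.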

This week, I gladly upgraded one of my Intel PVE-cluster nodes to Martin's latest kernel (pve-2.6.24.11 & related pkgs).

Now none of my CentOS guests can boot on this node, so I tried pve-2.6.32 - same thing. So I had to push my CentOS guests back to the other nodes in the pve-cluster (pve-2.6.24.10).

So, what's going on - is it these crappy Intel Xeons, or am I just a newbie doing something 'wrong'?

I will post any/all sysconfig details to any responders.

BFN
 
Hi again,

So, I finally managed to trawl through most of the forum threads, and to my surprise most of the issues I'm facing seem to be related to my Intel CPUs and CentOS.

None of the threads were explicit about how to 'fix' any of these issues - but there were enough vague posts that said things like 'mine works now', 'I turned off Hyper-Threading', 'try upgrading to pve-2.6.32', etc.

Here's what I did to get it working:

1) I turned off Hyper-Threading (in the BIOS) - not sure if this helped, but a lot of older posts talked about it
2) Converted the VMware vmdk files to qcow2 - again, not sure that this really did much to fix things
3) Changed the CPU sockets/cores from '2' to '1' - this seems to be the real 'fix' for my CentOS KVM guests
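For anyone following along, steps 2 and 3 can be done from the node's shell. A hedged sketch - the VM ID 101 and the file names are placeholders, and the exact qm options / config keys may differ between PVE versions:

```shell
# Step 2: convert the VMware disk image to qcow2
# (qemu-img ships with the KVM/qemu packages on the PVE host).
qemu-img convert -f vmdk -O qcow2 centos54.vmdk centos54.qcow2

# Step 3: drop the guest to a single socket and core.
# This can also be edited directly in /etc/qemu-server/101.conf
# (the 'sockets:' / 'cores:' lines); key names vary by version.
qm set 101 -sockets 1 -cores 1
```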

Now that my CentOS KVMs are running stably, I'm going to build a test-dummy CentOS KVM and sequentially test my list of fixes to see which ones are really a 'fix'; i.e. I'll try turning Hyper-Threading back on, try using the vmdk format instead of qcow2, and, just to be sure, repeat all of my testing on my AMD PII quad-core box.

If anyone else has tips or test results to add to my comments, please do.

It would be nice if we forum users/Proxmox support could boil down all of these 'fixes' and issues (for running CentOS/Linux KVMs on Intel CPUs) into a wiki page warning dweebs like me about which configs will and will not work for a given type of KVM guest and/or HW...

BFN
 
Ok,

My test results: the only thing that made it work was #3, changing from 2 (or more) cores/sockets to 1 core/1 socket, on my Intel proxmox servers (I have not tested my AMD box).
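A quick way to confirm the guest actually came up with a single vCPU after the change is to count CPUs from inside the guest (standard Linux, nothing PVE-specific):

```shell
# Count the logical CPUs the guest kernel sees; after the
# 1 socket / 1 core change this should print 1.
grep -c '^processor' /proc/cpuinfo
```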

BFN