Hi,
Actually, the setup of my cluster is quite simple.
I have an IMS with 2 Compute Modules, each with 24 GB RAM and a Xeon E5620 @ 2.40 GHz.
The storage model is the following:
14x 300 GB SAS drives in total (13 used and 1 hot spare).
The 13 drives are split into 2 drive groups: 4 drives for a storage pool called "Boot" and the remaining 9 for a pool called "SAN".
Boot has a total of 1113 GB and SAN 2506 GB.
Boot is split into 2 virtual drives for node 1 and node 2 (185 GB each).
SAN is not split and serves node 1 and node 2 as LVM storage.
The 2 virtual drives appear as drive 0 on the hosts.
The LVM storage is drive 1 on the hosts.
Proxmox VE is installed on drive 0 on each host.
The local drives on each node (drive 0) are not shared.
The LVM is shared.
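For completeness, the storage definition on the hosts should look roughly like this in /etc/pve/storage.cfg (the storage ID and volume group name "san" are placeholders for whatever the VG is actually called):

```
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,rootdir

lvm: san
        vgname san
        shared
        content images
```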
eth0 on each host is bridged on vmbr0 with an IP in my LAN.
eth1 is in another net and connected to another SAN via NFS for ISOs and backup purposes.
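On each host, the relevant part of /etc/network/interfaces looks roughly like this (all addresses below are placeholders, not my real ones):

```
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto eth1
iface eth1 inet static
        address 10.0.0.10
        netmask 255.255.255.0
```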
Initially the IMS ran under kernel 2.6.32 without OpenVZ support, installed via the remote console of the IMS web interface.
Several Windows servers have been running there for months now without any problems at all.
2 or 3 weeks ago I upgraded the kernel to 2.6.32-4 and everything else via aptitude upgrade, as instructed in the PVE wiki.
I downloaded an OpenVZ template (Debian 5 as well as 6) via the web interface, created a VZ (veth) and started it.
I logged in via the Java VNC console and ran "dhclient" for a quick aptitude update.
aptitude connected to the mirrors and started downloading, and in the process the VZ died, and the host with it.
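The CLI equivalent of what I did through the web interface boils down to this (CTID 101 and the template name are just examples):

```
vzctl create 101 --ostemplate debian-6.0-standard   # template downloaded via the web interface
vzctl set 101 --netif_add eth0 --save               # veth networking, as in my setup
vzctl start 101
vzctl enter 101
dhclient eth0        # get an address inside the container
aptitude update      # host and container freeze during the download
```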
Neither responded in any way (ssh, ping, etc.).
I had to restart the whole host via the IMS web interface.
I searched a bit for what might have caused the problem and found nothing.
Yesterday evening I updated host 1 to the latest stable kernel, of course via aptitude upgrade.
The outcome when starting a VZ was the same: it died during an aptitude update and crashed the host.
Then I migrated the KVMs (2 Windows servers) to the second node, installed PVE on node 1 from a fresh ISO (again via the IMS remote console) and updated the host after the installation.
I recreated the cluster, migrated 1 Windows server back to node 1, downloaded a VZ template and so on...
The outcome was again the same: during the network traffic caused by a simple aptitude update, the VZ and the host crashed.
I don't want to mess around on the IMS any more, because this is my production server.
My other PVE servers, based on Xeons but also on Phenoms and Core2Duos, have never experienced this kind of problem.
Their OpenVZs and KVMs run perfectly together on 1 host.
My pveversion -v output on host 1 is:
vm1:~# pveversion -v
pve-manager: 1.6-5 (pve-manager/1.6/5261)
running kernel: 2.6.32-4-pve
proxmox-ve-2.6.32: 1.6-25
pve-kernel-2.6.32-4-pve: 2.6.32-25
qemu-server: 1.1-22
pve-firmware: 1.0-9
libpve-storage-perl: 1.0-14
vncterm: 0.9-2
vzctl: 3.0.24-1pve4
vzdump: 1.2-8
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm: 0.12.5-2
ksm-control-daemon: 1.0-4
I hope this is detailed enough for you.

Best luck and thanks again