VM Locking up under load CentOS6

adamb

Hey all, having a good time doing some testing and I seem to be running into a slight issue. This is my first cluster utilizing central storage and the latest version of Proxmox. My other clusters are 2.3 with dual DRBD. I am running a CentOS 6 VM with the standard latest kernel (2.6.32-358.18.1.el6.x86_64). While running stress tests the VM will lock up, and CPU usage just seems to gradually climb as I watch the stats in the GUI. I run this same load on my other clusters and don't have this issue, so I am unsure if it is something with the new version.

I am wondering if I should just stick with 2.3 for the time being. I appreciate any help I can get.

If interested, this is the tool I am using:

http://linux.die.net/man/1/stress
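For reference, a typical invocation of that tool looks something like this (the worker counts and timeout here are just illustrative, not my exact command):

root@vm:~# stress --cpu 4 --io 2 --vm 2 --vm-bytes 512M --hdd 1 --timeout 600s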

root@testprox:~# pveversion -v
proxmox-ve-2.6.32: 3.1-113 (running kernel: 2.6.32-25-pve)
pve-manager: 3.1-16 (running version: 3.1-16/6a143a40)
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-2
pve-cluster: 3.0-7
qemu-server: 3.1-5
pve-firmware: 1.0-23
libpve-common-perl: 3.0-6
libpve-access-control: 3.0-6
libpve-storage-perl: 3.0-13
pve-libspice-server1: 0.12.4-2
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-17
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.0-2
 
I'm just running into too many issues with 3.1. Heading back to 2.3, which I know is rock solid. I will report back if I continue to have the same issues on version 2.3.
 
I ended up sticking with 3.1 but did a fresh install on both nodes and completely re-setup the cluster.

The stress test will make the VM virtually unresponsive for an extended period of time. This doesn't seem to happen on any of my other clusters running this exact load in a VM. The VM will come back, but there is no telling when. I can reproduce this issue with stress, iozone and bonnie. Something just doesn't seem right. I will more than likely open a ticket on this issue once my account is approved. Appreciate any input on this one!
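For anyone wanting to reproduce, commands along these lines inside the VM are enough to trigger it for me (the file sizes and paths are just examples, not my exact parameters):

root@vm:~# iozone -a -g 4G -f /tmp/iozone.tmp
root@vm:~# bonnie++ -d /tmp -s 8G -u root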


UPDATE

I am starting to wonder if it has something to do with the central storage. I am going to do some pure CPU tests to see if it will lock up, then do some specific IO tests.

In case anyone is interested, the hardware in use is as follows:
2x HP ProLiant DL380p
1x HP P2000 storage array

UPDATE #2

Interesting: after letting the VM sit for a while during an iozone test, it never came back. On the console I see "init: Disconnected from system bus". Am I right in thinking that I am losing my storage when this takes place? There are a ton of kernel errors on the node the VM was running on.
 
How is the storage used, iSCSI or NFS?
Format of the disk image: raw, qcow2 or vmdk?

I am using SAS with multipathing. I am presenting the block device to LVM, then to Proxmox. I think I will remove multipathing from the equation to see if that is the issue.

I use raw format on all of my VMs.
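For context, the storage layering looks roughly like this (the device and volume group names below are placeholders, not my actual setup): the multipath device from the P2000 becomes an LVM physical volume, the volume group is then added as LVM storage in Proxmox.

root@testprox:~# multipath -ll
root@testprox:~# pvcreate /dev/mapper/mpatha
root@testprox:~# vgcreate p2000-vg /dev/mapper/mpatha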
 
Just wanted to provide an update on this issue. I have narrowed it down to the filesystem mount options inside the VM. We have to use "data=journal" within the VM to ensure data integrity is never compromised. It looks like there is a bug in the current kernel which is causing this. It should be fixed with the next kernel version of RHEL/CentOS.

If I go back to "data=ordered" I no longer hit the lockups.
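For anyone else hitting this, the setting is the data journaling mode on the guest's ext3/ext4 filesystems, set in /etc/fstab (the device name and filesystem type below are just placeholders for illustration):

# lockups under heavy IO with the affected kernel:
/dev/mapper/vg_centos-lv_root  /  ext4  defaults,data=journal  1 1
# workaround until the kernel fix lands:
/dev/mapper/vg_centos-lv_root  /  ext4  defaults,data=ordered  1 1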

https://bugzilla.redhat.com/show_bug.cgi?id=834919
 