We recently upgraded two servers with dual socket Xeon boards.
One server has two E5-2687W v3 CPUs.
The other has two E5-2620 v3 CPUs.
I have three Debian Wheezy guests that have had issues: one on the 2687W host and two on the 2620 host.
These guests are currently pulled out of production, so they just sit there idle all day.
The only real load is a cron job that runs every few minutes; it makes some HTTP requests and reads/writes some tiny files.
The only clue I have is a kernel message in the guest about jbd2/dm-0-8 being blocked for more than 120 seconds.
I don't have the exact error, but it was something like "INFO: task jbd2/dm-0-8 blocked for more than 120 seconds."
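I'll grab the exact text from the guest's kernel log the next time it hangs; something like this, run inside the guest, should show it along with the hung-task timeout the 120 seconds comes from:
Code:
# pull any hung-task messages out of the kernel ring buffer
dmesg | grep -i "blocked for more than"
# the 120 second threshold is the kernel's hung-task timeout
cat /proc/sys/kernel/hung_task_timeout_secs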
I/O stalls and the load average keeps rising.
The only way to recover is to stop/start the VM.
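For the record, this is roughly what I do on the host to recover one of them (107 is one of the affected guests; its config is below):
Code:
# force the hung guest off and start it again
qm stop 107
qm start 107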
The guests worked fine before the upgrade.
The only components changed were the CPU, RAM, and motherboard.
We're still using the same RAID card and disks.
Storage is LVM over DRBD.
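In case it matters, this is roughly how I check the storage stack on the hosts when a guest hangs (assuming the DRBD 8.x tools that ship with this PVE version):
Code:
# DRBD connection and sync state on the host
cat /proc/drbd
# LVM volume groups and Proxmox storage status
vgs
pvesm status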
Oddly, there have been no issues with the Windows guests so far.
Any suggestions?
VM config file:
Code:
# cat /etc/pve/qemu-server/107.conf
bootdisk: virtio0
cores: 1
ide2: none,media=cdrom
memory: 1280
name: XXXXXXXXXXX
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr10
onboot: 1
ostype: l26
sockets: 1
virtio0: vm9-vm10:vm-107-disk-1,cache=directsync,size=3G
Code:
# pveversion -v
proxmox-ve-2.6.32: 3.3-139 (running kernel: 2.6.32-34-pve)
pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-12-pve: 2.6.32-68
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-13-pve: 2.6.32-72
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-34-pve: 2.6.32-139
pve-kernel-2.6.32-14-pve: 2.6.32-74
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-18-pve: 2.6.32-88
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-1
pve-cluster: 3.0-15
qemu-server: 3.3-3
pve-firmware: 1.1-3
libpve-common-perl: 3.0-19
libpve-access-control: 3.0-15
libpve-storage-perl: 3.0-25
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.1-10
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1