Hi,
We've hit a PVE3 hypervisor hang (blank console, no hints in syslog, only a reset helps) after upgrading the 3.10 kernel to
pve-kernel-3.10.0-15-pve_3.10.0-40
(after 2-3 days of uptime).
There were no such problems with earlier 2.6 and 3.10 kernels on this machine (after a downgrade to 3.10.0-14-pve the system worked without issues).
After upgrading the kernel to
pve-kernel-3.10.0-16-pve_3.10.0-42
the same problem occurred after about 1 day.
It seems there is something wrong with the pve-kernel-3.10.0-15 and pve-kernel-3.10.0-16 kernels.
The system is a standalone hypervisor with the proxmox-ve-2.6.32 package uninstalled manually (switch to the 3.10 kernel):
# pveversion -v
proxmox-ve-2.6.32: not correctly installed (running kernel: 3.10.0-14-pve)
pve-manager: 3.4-11 (running version: 3.4-11/6502936f)
pve-kernel-2.6.32-40-pve: 2.6.32-160
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-3.10.0-13-pve: 3.10.0-38
pve-kernel-2.6.32-39-pve: 2.6.32-157
pve-kernel-2.6.32-41-pve: 2.6.32-164
pve-kernel-2.6.32-33-pve: 2.6.32-138
pve-kernel-3.10.0-14-pve: 3.10.0-39
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-3.10.0-12-pve: 3.10.0-37
pve-kernel-2.6.32-38-pve: 2.6.32-155
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-19
qemu-server: 3.4-6
pve-firmware: 1.1-5
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-34
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-18
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
Regards,
Pawel