Hello community,
After the last update to:
Kernel Version Linux 4.4.40-1-pve
#1 SMP PVE 4.4.40-82
I got some weird behaviour.
First I have to say that after the last reboot the root filesystem had some errors and Proxmox didn't start.
On the initramfs command line I ran: fsck -y /dev/mapper/pve-root
After that, the system booted normally with the newest kernel version above.
But no KVM virtual machine would start. I got the message that the kvm module could not be loaded:
command 'kvm -version' failed: got signal 11
Could not access KVM kernel module: No such file or directory
failed to initialize KVM: No such file or directory
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 ...' failed: got signal 11
So I ran modprobe kvm, and it resulted in the same error.
After that I tried to boot with an older kernel. In this case it was:
Kernel Version Linux 4.4.19-1-pve
#1 SMP PVE 4.4.19-66
I used this kernel because it was the last one the system worked with. I do not reboot after every kernel update; that's too much. I just have one node, without any HA options.
With this kernel the system booted normally and the VMs started, but immediately. That was weird, because I had set a start order and wait time for them. And I got the same error message when starting the VMs, for example:
TASK ERROR: start failed: command '/usr/bin/kvm -id 101 ...' failed: got signal 11
OK, that's not so bad; the VMs are working. So I reinstalled the latest kernel, just in case something had been damaged by those filesystem errors.
Today I had some time and tried rebooting with the new kernel. The VMs can be started now with this kernel, but the same error message appears. I reset the order and wait time to the defaults and rebooted again. The VMs started, but still with this error message.
Any idea what could be happening there?
Thanks in advance