Debian KVM Freezes randomly

Darkie

Hey,
today my Debian root server froze twice and I had to reset it each time.
Do you know why that could happen?
I got these errors at the same time (I think):
Code:
Dec 20 20:00:20 deb1 kernel: vmbr0: port 3(tap102i0) entering disabled state
Dec 20 20:00:20 deb1 kernel: vmbr0: port 3(tap102i0) entering disabled state
Dec 20 20:01:08 deb1 kernel: device tap102i0 entered promiscuous mode
Dec 20 20:01:08 deb1 kernel: vmbr0: port 3(tap102i0) entering forwarding state
Dec 20 20:01:18 deb1 kernel: kvm: 967219: cpu0 unhandled rdmsr: 0xc0010112
Dec 20 20:01:18 deb1 kernel: kvm: 967219: cpu0 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu0 unhandled rdmsr: 0xc0010001
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu1 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu2 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu4 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu5 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu6 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: tap102i0: no IPv6 routers present
/var/log/syslog
Code:
Dec 20 19:12:29 deb1 pveproxy[959027]: worker exit
Dec 20 19:12:29 deb1 pveproxy[3431]: worker 959027 finished
Dec 20 19:12:29 deb1 pveproxy[3431]: starting 1 worker(s)
Dec 20 19:12:29 deb1 pveproxy[3431]: worker 963346 started
Dec 20 19:16:52 deb1 pveproxy[961360]: worker exit
Dec 20 19:16:52 deb1 pveproxy[3431]: worker 961360 finished
Dec 20 19:16:52 deb1 pveproxy[3431]: starting 1 worker(s)
Dec 20 19:16:52 deb1 pveproxy[3431]: worker 963714 started
Dec 20 19:17:01 deb1 /USR/SBIN/CRON[963718]: (root) CMD (  cd / && run-parts --report /etc/cron.hourly)
Dec 20 19:20:07 deb1 pveproxy[961734]: worker exit
Dec 20 19:20:07 deb1 pveproxy[3431]: worker 961734 finished
Dec 20 19:20:07 deb1 pveproxy[3431]: starting 1 worker(s)
Dec 20 19:20:07 deb1 pveproxy[3431]: worker 963966 started
Dec 20 19:23:25 deb1 rrdcached[2939]: flushing old values
Dec 20 19:23:25 deb1 rrdcached[2939]: rotating journals
Dec 20 19:23:25 deb1 rrdcached[2939]: started new journal /var/lib/rrdcached/journal/rrd.journal.1450635805.490806
Dec 20 19:23:25 deb1 rrdcached[2939]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1450628605.490811
Dec 20 19:25:08 deb1 pvedaemon[960177]: worker exit
Dec 20 19:25:08 deb1 pvedaemon[961586]: <root@pam> successful auth for user 'root@pam'
Dec 20 19:25:08 deb1 pvedaemon[3403]: worker 960177 finished
Dec 20 19:25:08 deb1 pvedaemon[3403]: starting 1 worker(s)
Dec 20 19:25:08 deb1 pvedaemon[3403]: worker 964357 started
Dec 20 19:40:08 deb1 pvedaemon[961586]: <root@pam> successful auth for user 'root@pam'
Dec 20 19:42:46 deb1 pveproxy[963346]: worker exit
Dec 20 19:42:46 deb1 pveproxy[3431]: worker 963346 finished
Dec 20 19:42:46 deb1 pveproxy[3431]: starting 1 worker(s)
Dec 20 19:42:46 deb1 pveproxy[3431]: worker 965753 started
Dec 20 19:42:57 deb1 pvedaemon[961586]: worker exit
Dec 20 19:42:57 deb1 pvedaemon[3403]: worker 961586 finished
Dec 20 19:42:57 deb1 pvedaemon[3403]: starting 1 worker(s)
Dec 20 19:42:57 deb1 pvedaemon[3403]: worker 965766 started
Dec 20 19:44:30 deb1 pveproxy[963714]: worker exit
Dec 20 19:44:30 deb1 pveproxy[3431]: worker 963714 finished
Dec 20 19:44:30 deb1 pveproxy[3431]: starting 1 worker(s)
Dec 20 19:44:30 deb1 pveproxy[3431]: worker 965884 started
Dec 20 19:46:37 deb1 pvedaemon[958937]: worker exit
Dec 20 19:46:37 deb1 pvedaemon[3403]: worker 958937 finished
Dec 20 19:46:37 deb1 pvedaemon[3403]: starting 1 worker(s)
Dec 20 19:46:37 deb1 pvedaemon[3403]: worker 966057 started
Dec 20 19:55:09 deb1 pvedaemon[966057]: <root@pam> successful auth for user 'root@pam'
Dec 20 19:57:11 deb1 pvedaemon[966057]: <root@pam> successful auth for user 'root@pam'
Dec 20 19:57:56 deb1 pvedaemon[964357]: <root@pam> starting task UPID:deb1:000EC12B:04066A88:5676FA34:vncproxy:102:root@pam:
Dec 20 19:57:56 deb1 pvedaemon[966955]: starting vnc proxy UPID:deb1:000EC12B:04066A88:5676FA34:vncproxy:102:root@pam:
Dec 20 19:59:39 deb1 pvedaemon[967092]: stop VM 102: UPID:deb1:000EC1B4:040692D6:5676FA9B:qmstop:102:root@pam:
Dec 20 19:59:39 deb1 pvedaemon[966057]: <root@pam> starting task UPID:deb1:000EC1B4:040692D6:5676FA9B:qmstop:102:root@pam:
Dec 20 19:59:43 deb1 pvedaemon[964357]: got timeout
Dec 20 19:59:53 deb1 pvedaemon[964357]: unable to connect to VM 102 qmp socket - timeout after 31 retries
Dec 20 20:00:04 deb1 pvedaemon[964357]: unable to connect to VM 102 qmp socket - timeout after 31 retries
Dec 20 20:00:09 deb1 pvedaemon[967092]: VM still running - terminating now with SIGTERM
Dec 20 20:00:14 deb1 pvedaemon[964357]: unable to connect to VM 102 qmp socket - timeout after 31 retries
Dec 20 20:00:19 deb1 pvedaemon[967092]: VM still running - terminating now with SIGKILL
Dec 20 20:00:20 deb1 avahi-daemon[3122]: Interface tap102i0.IPv6 no longer relevant for mDNS.
Dec 20 20:00:20 deb1 avahi-daemon[3122]: Leaving mDNS multicast group on interface tap102i0.IPv6 with address fe80::c037:7bff:fe61:845a.
Dec 20 20:00:20 deb1 avahi-daemon[3122]: Withdrawing address record for fe80::c037:7bff:fe61:845a on tap102i0.
Dec 20 20:00:20 deb1 kernel: vmbr0: port 3(tap102i0) entering disabled state
Dec 20 20:00:20 deb1 kernel: vmbr0: port 3(tap102i0) entering disabled state
Dec 20 20:00:20 deb1 avahi-daemon[3122]: Withdrawing workstation service for tap102i0.
Dec 20 20:00:21 deb1 pvedaemon[967146]: starting vnc proxy UPID:deb1:000EC1EA:0406A2FA:5676FAC5:vncproxy:102:root@pam:
Dec 20 20:00:21 deb1 pvedaemon[964357]: <root@pam> starting task UPID:deb1:000EC1EA:0406A2FA:5676FAC5:vncproxy:102:root@pam:
Dec 20 20:00:21 deb1 ntpd[3521]: Deleting interface #13 tap102i0, fe80::c037:7bff:fe61:845a#123, interface stats: received=0, sent=0, dropped=0, active_time=111564 secs
Dec 20 20:00:21 deb1 ntpd[3521]: peers refreshed
Dec 20 20:00:21 deb1 qm[967148]: VM 102 qmp command failed - VM 102 not running
Dec 20 20:00:21 deb1 pvedaemon[967146]: command '/bin/nc -l -p 5900 -w 10 -c '/usr/sbin/qm vncproxy 102 2>/dev/null'' failed: exit code 255
Dec 20 20:00:24 deb1 vnstatd[2670]: Interface "tap102i0" disabled.
Dec 20 20:01:07 deb1 pvedaemon[967214]: start VM 102: UPID:deb1:000EC22E:0406B4F8:5676FAF3:qmstart:102:root@pam:
Dec 20 20:01:07 deb1 pvedaemon[965766]: <root@pam> starting task UPID:deb1:000EC22E:0406B4F8:5676FAF3:qmstart:102:root@pam:
Dec 20 20:01:08 deb1 kernel: device tap102i0 entered promiscuous mode
Dec 20 20:01:08 deb1 kernel: vmbr0: port 3(tap102i0) entering forwarding state
Dec 20 20:01:10 deb1 vnstatd[2670]: Interface "tap102i0" enabled.
Dec 20 20:01:10 deb1 avahi-daemon[3122]: Joining mDNS multicast group on interface tap102i0.IPv6 with address fe80::2478:e4ff:fec7:77b3.
Dec 20 20:01:10 deb1 avahi-daemon[3122]: New relevant interface tap102i0.IPv6 for mDNS.
Dec 20 20:01:10 deb1 avahi-daemon[3122]: Registering new address record for fe80::2478:e4ff:fec7:77b3 on tap102i0.*.
Dec 20 20:01:11 deb1 ntpd[3521]: Listen normally on 14 tap102i0 fe80::2478:e4ff:fec7:77b3 UDP 123
Dec 20 20:01:11 deb1 ntpd[3521]: peers refreshed
Dec 20 20:01:18 deb1 kernel: kvm: 967219: cpu0 unhandled rdmsr: 0xc0010112
Dec 20 20:01:18 deb1 kernel: kvm: 967219: cpu0 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu0 unhandled rdmsr: 0xc0010001
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu1 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu2 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu4 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu5 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: kvm: 967219: cpu6 unhandled rdmsr: 0xc0010048
Dec 20 20:01:19 deb1 kernel: tap102i0: no IPv6 routers present
Dec 20 20:01:35 deb1 pveproxy[963966]: worker exit
Dec 20 20:01:35 deb1 pveproxy[3431]: worker 963966 finished
Dec 20 20:01:35 deb1 pveproxy[3431]: starting 1 worker(s)
Dec 20 20:01:35 deb1 pveproxy[3431]: worker 967343 started
Dec 20 20:01:58 deb1 pveproxy[965753]: worker exit
Dec 20 20:01:58 deb1 pveproxy[3431]: worker 965753 finished
Dec 20 20:01:58 deb1 pveproxy[3431]: starting 1 worker(s)
pveversion -v
Code:
proxmox-ve-2.6.32: 3.4-166 (running kernel: 2.6.32-43-pve)
pve-manager: 3.4-11 (running version: 3.4-11/6502936f)
pve-kernel-2.6.32-41-pve: 2.6.32-164
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-43-pve: 2.6.32-166
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-19
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-34
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-13
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1
 
Dec 20 19:59:39 deb1 pvedaemon[967092]: stop VM 102: UPID:deb1:000EC1B4:040692D6:5676FA9B:qmstop:102:root@pam:

This was executed before the kernel messages appeared; also, those kernel messages are "normal" [1].

So to give us an idea of what happened, please look at the VM's own logs. Did the storage hang, or something like that?
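For example, the next time it hangs, a couple of host-side checks (assuming the VM ID 102 from the log above) would tell us whether QEMU itself is still responsive:
Code:
# Ask the Proxmox tools whether the VM is still considered running
qm status 102

# Query QEMU directly via the monitor; type "info status", then "quit"
qm monitor 102

# Check that the QMP socket still exists (a hung QMP connection often points to I/O trouble)
ls -l /var/run/qemu-server/102.qmp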
 
Hey,
sorry for not responding for a while... I had some personal things to do.
OK, this KVM on Proxmox freezes every day for an unknown reason: no response from nginx, no SSH response, no response in the (no)VNC session; it's completely dead. The host always has ~10 GB of RAM free, so RAM shouldn't be the problem.
I can't find any other log files I could provide here :(
 
@windinternet: 2 network devices, one for local communication and one for the official internet connection.
Linux 3.X/2.6 Kernel (l26), Debian 7 Wheezy 64-bit


Network(/etc/network/interfaces):
Code:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
allow-hotplug eth0
iface eth0 inet static
  address *ipadress*
  netmask 255.255.255.248
  network *hidden net*
  broadcast *hidden net*
  gateway *hidden gateway*
  # dns-* options are implemented by the resolvconf package, if installed
  dns-nameservers 8.8.8.8


allow-hotplug eth1
iface eth1 inet static
  address 10.10.10.3
  netmask 255.255.255.0
  gateway 10.10.10.1
  # dns-* options are implemented by the resolvconf package, if installed
  dns-nameservers 8.8.8.8
 
With network device I meant the device you selected in the KVM configuration: virtio, e1000, vmware, realtek.

With Kernel 3.X/2.6 (l26) you mean 3.x in the guest and 2.6.32 on the host (PVE 3.4?).
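You can read the configured NIC model from the VM configuration on the host, e.g. (assuming VM ID 102):
Code:
# Show the network device lines of the VM configuration
qm config 102 | grep -i '^net'

# or read them directly from the config file
grep '^net' /etc/pve/qemu-server/102.conf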
 
@windinternet Yes, both network interfaces are virtio.
Linux 3.X/2.6 Kernel (l26) is the OS type configured in Proxmox.

cat /proc/version of the guest OS:
Code:
Linux version 3.2.0-4-amd64 (debian-kernel@lists.debian.org) (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.68-1+deb7u6

The host:
Code:
Linux version 2.6.32-43-pve (root@lola) (gcc version 4.7.2 (Debian 4.7.2-5) ) #1 SMP Tue Oct 27 09:55:55 CET 2015
I hope that is what you wanted to know.
 
OK, the problem is probably the virtio NIC causing segmentation offload problems on the interface between host and guest, eventually shutting down all network communication with the guest. The newer kernel in the guest will try to use such offload functions. I guess that if you can access the qm tool on the host, it would still show the VM running OK.

Try switching to e1000 for a while.
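Roughly like this (a sketch, assuming VM ID 102, the first NIC as net0, and a placeholder MAC/bridge; keep your actual values):
Code:
# On the host: change the first NIC from virtio to e1000
qm set 102 -net0 e1000=AA:BB:CC:DD:EE:FF,bridge=vmbr0
# The model change only takes effect after a full stop/start of the VM.

# Alternative: keep virtio, but disable segmentation offloads inside the guest
ethtool -K eth0 tso off gso off gro off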
 
I've changed this and restarted the server. If this error still happens, I will notify you :)
 
