vcpu unhandled rdmsr

kcastner

New Member
Mar 7, 2017
Hello,

For several days now I have been getting error messages like the following:

Mar 07 14:00:58 pve kernel: kvm [8606]: vcpu0 unhandled rdmsr: 0xc0010048
Mar 07 14:00:58 pve kernel: kvm [8606]: vcpu1 unhandled rdmsr: 0xc0010048
Mar 07 14:01:03 pve kernel: kvm [8606]: vcpu0 unhandled rdmsr: 0x3a
Mar 07 14:01:03 pve kernel: kvm [8606]: vcpu0 unhandled rdmsr: 0xd90
Mar 07 14:01:03 pve kernel: kvm [8606]: vcpu0 unhandled rdmsr: 0xc0000103

These messages only appear when I boot a Linux system as a KVM guest. It takes about 3-5 seconds after boot until the error messages appear.

At the moment only 3 Linux KVMs and 1 Windows KVM are running on the node. Here is an excerpt from the system log:

Mar 07 13:24:02 pve pvedaemon[4546]: starting vnc proxy UPID:pve:000011C2:006E1675:58BEA662:vncproxy:102:root@pam:
Mar 07 13:24:02 pve pvedaemon[30893]: <root@pam> starting task UPID:pve:000011C2:006E1675:58BEA662:vncproxy:102:root@pam:
Mar 07 13:24:16 pve pvedaemon[1511]: <root@pam> starting task UPID:pve:000011DE:006E1BE0:58BEA670:vncproxy:101:root@pam:
Mar 07 13:24:16 pve pvedaemon[4574]: starting vnc proxy UPID:pve:000011DE:006E1BE0:58BEA670:vncproxy:101:root@pam:
Mar 07 13:25:20 pve pvedaemon[1511]: <root@pam> starting task UPID:pve:00001243:006E34DF:58BEA6B0:vncproxy:101:root@pam:
Mar 07 13:25:20 pve pvedaemon[4675]: starting vnc proxy UPID:pve:00001243:006E34DF:58BEA6B0:vncproxy:101:root@pam:
Mar 07 13:26:03 pve pvedaemon[32130]: <root@pam> starting task UPID:pve:00001285:006E459F:58BEA6DB:vncproxy:102:root@pam:
Mar 07 13:26:03 pve pvedaemon[4741]: starting vnc proxy UPID:pve:00001285:006E459F:58BEA6DB:vncproxy:102:root@pam:
Mar 07 13:26:21 pve pvedaemon[32130]: worker exit
Mar 07 13:26:21 pve pvedaemon[1285]: worker 32130 finished
Mar 07 13:26:21 pve pvedaemon[1285]: starting 1 worker(s)
Mar 07 13:26:21 pve pvedaemon[1285]: worker 4778 started
Mar 07 13:26:23 pve pvedaemon[4781]: starting vnc proxy UPID:pve:000012AD:006E4D80:58BEA6EF:vncproxy:101:root@pam:
Mar 07 13:26:23 pve pvedaemon[30893]: <root@pam> starting task UPID:pve:000012AD:006E4D80:58BEA6EF:vncproxy:101:root@pam:
Mar 07 13:27:10 pve pveproxy[12314]: worker 3345 finished
Mar 07 13:27:10 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 13:27:10 pve pveproxy[12314]: worker 4852 started
Mar 07 13:27:14 pve pveproxy[4851]: got inotify poll request in wrong process - disabling inotify
Mar 07 13:27:27 pve pvedaemon[30893]: <root@pam> starting task UPID:pve:00001316:006E6679:58BEA72F:vncproxy:101:root@pam:
Mar 07 13:27:27 pve pvedaemon[4886]: starting vnc proxy UPID:pve:00001316:006E6679:58BEA72F:vncproxy:101:root@pam:
Mar 07 13:27:57 pve pvedaemon[4933]: starting vnc proxy UPID:pve:00001345:006E7262:58BEA74D:vncshell::root@pam:
Mar 07 13:27:57 pve pvedaemon[1511]: <root@pam> starting task UPID:pve:00001345:006E7262:58BEA74D:vncshell::root@pam:
Mar 07 13:27:57 pve pvedaemon[4933]: launch command: /usr/bin/vncterm -rfbport 5903 -timeout 10 -authpath /nodes/pve -perm Sys.Console -notls -listen localhost -c /bin/bash -l
Mar 07 13:28:00 pve pveproxy[4851]: worker exit
Mar 07 13:28:02 pve pvedaemon[4948]: starting vnc proxy UPID:pve:00001354:006E7420:58BEA752:vncproxy:102:root@pam:
Mar 07 13:28:02 pve pvedaemon[4778]: <root@pam> starting task UPID:pve:00001354:006E7420:58BEA752:vncproxy:102:root@pam:
Mar 07 13:28:30 pve pvedaemon[4778]: <root@pam> starting task UPID:pve:00001385:006E7F19:58BEA76E:vncproxy:101:root@pam:
Mar 07 13:28:30 pve pvedaemon[4997]: starting vnc proxy UPID:pve:00001385:006E7F19:58BEA76E:vncproxy:101:root@pam:
Mar 07 13:29:04 pve pvedaemon[1285]: worker 1511 finished
Mar 07 13:29:04 pve pvedaemon[1285]: starting 1 worker(s)
Mar 07 13:29:04 pve pvedaemon[1285]: worker 5049 started
Mar 07 13:29:07 pve pvedaemon[5048]: got inotify poll request in wrong process - disabling inotify
Mar 07 13:29:07 pve pvedaemon[5048]: worker exit
Mar 07 13:29:34 pve pvedaemon[4778]: <root@pam> starting task UPID:pve:000013EC:006E983F:58BEA7AE:vncproxy:101:root@pam:
Mar 07 13:29:34 pve pvedaemon[5100]: starting vnc proxy UPID:pve:000013EC:006E983F:58BEA7AE:vncproxy:101:root@pam:
Mar 07 13:29:44 pve pvedaemon[4778]: <root@pam> successful auth for user 'root@pam'
Mar 07 13:30:03 pve pvedaemon[5147]: starting vnc proxy UPID:pve:0000141B:006EA34D:58BEA7CB:vncproxy:102:root@pam:
Mar 07 13:30:03 pve pvedaemon[4778]: <root@pam> starting task UPID:pve:0000141B:006EA34D:58BEA7CB:vncproxy:102:root@pam:
Mar 07 13:30:33 pve pveproxy[12314]: worker 3055 finished
Mar 07 13:30:33 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 13:30:33 pve pveproxy[12314]: worker 5199 started
Mar 07 13:30:35 pve pveproxy[5198]: got inotify poll request in wrong process - disabling inotify
Mar 07 13:30:37 pve pveproxy[5198]: worker exit
Mar 07 13:30:38 pve pvedaemon[30893]: <root@pam> starting task UPID:pve:00001459:006EB11A:58BEA7EE:vncproxy:101:root@pam:
Mar 07 13:30:38 pve pvedaemon[5209]: starting vnc proxy UPID:pve:00001459:006EB11A:58BEA7EE:vncproxy:101:root@pam:
Mar 07 13:31:02 pve pvedaemon[5276]: Can't exec "multipath": No such file or directory at /usr/share/perl5/PVE/Report.pm line 96, <COMMAND> line 274.
Mar 07 13:31:02 pve pvedaemon[5049]: readline() on closed filehandle COMMAND at /usr/share/perl5/PVE/Report.pm line 97, <COMMAND> line 274.
Mar 07 13:31:02 pve pvedaemon[5277]: Can't exec "multipath": No such file or directory at /usr/share/perl5/PVE/Report.pm line 96, <COMMAND> line 274.
Mar 07 13:31:02 pve pvedaemon[5049]: readline() on closed filehandle COMMAND at /usr/share/perl5/PVE/Report.pm line 97, <COMMAND> line 274.
Mar 07 13:31:35 pve pvedaemon[30893]: worker exit
Mar 07 13:31:36 pve pvedaemon[1285]: worker 30893 finished
Mar 07 13:31:36 pve pvedaemon[1285]: starting 1 worker(s)
Mar 07 13:31:36 pve pvedaemon[1285]: worker 5333 started
Mar 07 13:31:42 pve pvedaemon[5346]: starting vnc proxy UPID:pve:000014E2:006ECA20:58BEA82E:vncproxy:101:root@pam:
Mar 07 13:31:42 pve pvedaemon[4778]: <root@pam> starting task UPID:pve:000014E2:006ECA20:58BEA82E:vncproxy:101:root@pam:
Mar 07 13:32:02 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:00001503:006ED1CF:58BEA842:vncproxy:102:root@pam:
Mar 07 13:32:02 pve pvedaemon[5379]: starting vnc proxy UPID:pve:00001503:006ED1CF:58BEA842:vncproxy:102:root@pam:
Mar 07 13:32:45 pve pvedaemon[5447]: starting vnc proxy UPID:pve:00001547:006EE2BA:58BEA86D:vncproxy:101:root@pam:
Mar 07 13:32:45 pve pvedaemon[5049]: <root@pam> starting task UPID:pve:00001547:006EE2BA:58BEA86D:vncproxy:101:root@pam:
Mar 07 13:33:16 pve pvedaemon[5506]: stop VM 103: UPID:pve:00001582:006EEEF4:58BEA88C:qmstop:103:root@pam:
Mar 07 13:33:16 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:00001582:006EEEF4:58BEA88C:qmstop:103:root@pam:
Mar 07 13:33:17 pve kernel: vmbr0: port 4(tap103i0) entered disabled state
Mar 07 13:33:47 pve kernel: vmbr0: port 3(tap102i0) entered disabled state
Mar 07 13:33:47 pve pvedaemon[4778]: client closed connection
Mar 07 13:33:49 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:000015C9:006EFBC2:58BEA8AD:vncproxy:101:root@pam:
Mar 07 13:33:49 pve pvedaemon[5577]: starting vnc proxy UPID:pve:000015C9:006EFBC2:58BEA8AD:vncproxy:101:root@pam:
Mar 07 13:33:50 pve pvedaemon[5379]: command '/bin/nc6 -l -p 5900 -w 10 -e '/usr/sbin/qm vncproxy 102 2>/dev/null'' failed: exit code 1
Mar 07 13:33:57 pve pvedaemon[5049]: VM 102 qmp command failed - VM 102 not running
Mar 07 13:33:57 pve pvedaemon[5593]: shutdown VM 102: UPID:pve:000015D9:006EFED5:58BEA8B5:qmshutdown:102:root@pam:
Mar 07 13:33:57 pve pvedaemon[5049]: <root@pam> starting task UPID:pve:000015D9:006EFED5:58BEA8B5:qmshutdown:102:root@pam:
Mar 07 13:34:07 pve pvedaemon[5608]: starting vnc proxy UPID:pve:000015E8:006F0297:58BEA8BF:vncproxy:101:root@pam:
Mar 07 13:34:07 pve pvedaemon[4778]: <root@pam> starting task UPID:pve:000015E8:006F0297:58BEA8BF:vncproxy:101:root@pam:
Mar 07 13:34:15 pve pveproxy[31530]: worker exit
Mar 07 13:34:19 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:00001601:006F0756:58BEA8CB:vncproxy:103:root@pam:
Mar 07 13:34:19 pve pvedaemon[5633]: starting vnc proxy UPID:pve:00001601:006F0756:58BEA8CB:vncproxy:103:root@pam:
Mar 07 13:34:20 pve qm[5636]: VM 103 qmp command failed - VM 103 not running
Mar 07 13:34:34 pve kernel: vmbr0: port 2(tap101i0) entered disabled state
Mar 07 13:34:34 pve pvedaemon[5608]: command '/bin/nc6 -l -p 5900 -w 10 -e '/usr/sbin/qm vncproxy 101 2>/dev/null'' failed: exit code 1
Mar 07 13:34:34 pve pvedaemon[5577]: command '/bin/nc6 -l -p 5902 -w 10 -e '/usr/sbin/qm vncproxy 101 2>/dev/null'' failed: exit code 1
Mar 07 13:34:35 pve pvedaemon[5673]: starting vnc proxy UPID:pve:00001629:006F0DB6:58BEA8DB:vncproxy:101:root@pam:
Mar 07 13:34:35 pve pvedaemon[5049]: <root@pam> starting task UPID:pve:00001629:006F0DB6:58BEA8DB:vncproxy:101:root@pam:
Mar 07 13:34:36 pve qm[5676]: VM 101 qmp command failed - VM 101 not running
Mar 07 13:36:02 pve pvedaemon[4778]: <root@pam> successful auth for user 'root@pam'
Mar 07 13:36:16 pve pvedaemon[5823]: starting vnc proxy UPID:pve:000016BF:006F34FF:58BEA940:vncshell::root@pam:
Mar 07 13:36:16 pve pvedaemon[5049]: <root@pam> starting task UPID:pve:000016BF:006F34FF:58BEA940:vncshell::root@pam:
Mar 07 13:36:16 pve pvedaemon[5823]: launch command: /usr/bin/vncterm -rfbport 5900 -timeout 10 -authpath /nodes/pve -perm Sys.Console -notls -listen localhost -c /bin/bash -l
Mar 07 13:37:38 pve pvedaemon[5049]: <root@pam> successful auth for user 'root@pam'
Mar 07 13:37:41 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:000018A5:006F567D:58BEA995:vncshell::root@pam:
Mar 07 13:37:41 pve pvedaemon[6309]: starting vnc proxy UPID:pve:000018A5:006F567D:58BEA995:vncshell::root@pam:
Mar 07 13:37:41 pve pvedaemon[6309]: launch command: /usr/bin/vncterm -rfbport 5901 -timeout 10 -authpath /nodes/pve -perm Sys.Console -notls -listen localhost -c /bin/bash -l
Mar 07 13:38:44 pve pvedaemon[5049]: <root@pam> successful auth for user 'root@pam'
Mar 07 13:38:45 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:0000190E:006F6F8B:58BEA9D5:vncshell::root@pam:
Mar 07 13:38:45 pve pvedaemon[6414]: starting vnc proxy UPID:pve:0000190E:006F6F8B:58BEA9D5:vncshell::root@pam:
Mar 07 13:38:45 pve pvedaemon[6414]: launch command: /usr/bin/vncterm -rfbport 5900 -timeout 10 -authpath /nodes/pve -perm Sys.Console -notls -listen localhost -c /bin/bash -l
Mar 07 13:39:29 pve pveproxy[3929]: worker exit
Mar 07 13:39:29 pve pveproxy[12314]: worker 3929 finished
Mar 07 13:39:29 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 13:39:29 pve pveproxy[12314]: worker 6491 started
Mar 07 13:41:33 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:00001A16:006FB116:58BEAA7D:qmstart:100:root@pam:
Mar 07 13:41:33 pve pvedaemon[6678]: start VM 100: UPID:pve:00001A16:006FB116:58BEAA7D:qmstart:100:root@pam:
Mar 07 13:41:34 pve systemd[1]: Starting 100.scope.
Mar 07 13:41:34 pve systemd[1]: Started 100.scope.
Mar 07 13:41:35 pve kernel: kvm: zapping shadow pages for mmio generation wraparound
Mar 07 13:41:35 pve kernel: kvm: zapping shadow pages for mmio generation wraparound
Mar 07 13:41:45 pve kernel: kvm [6687]: vcpu0 unhandled rdmsr: 0xc0010048
Mar 07 13:41:54 pve kernel: kvm [6687]: vcpu0 unhandled rdmsr: 0x3a
Mar 07 13:41:54 pve kernel: kvm [6687]: vcpu0 unhandled rdmsr: 0xd90
Mar 07 13:41:54 pve kernel: kvm [6687]: vcpu0 unhandled rdmsr: 0xc0000103
Mar 07 13:42:33 pve pvedaemon[4778]: <root@pam> starting task UPID:pve:00001A83:006FC84F:58BEAAB9:vncproxy:100:root@pam:
Mar 07 13:42:33 pve pvedaemon[6787]: starting vnc proxy UPID:pve:00001A83:006FC84F:58BEAAB9:vncproxy:100:root@pam:
Mar 07 13:44:43 pve pveproxy[12314]: worker 5199 finished
Mar 07 13:44:43 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 13:44:43 pve pveproxy[12314]: worker 6984 started
Mar 07 13:44:43 pve pveproxy[6983]: got inotify poll request in wrong process - disabling inotify
Mar 07 13:44:45 pve pvedaemon[4778]: <root@pam> successful auth for user 'root@pam'
Mar 07 13:47:18 pve pvestatd[1275]: status update time (6.320 seconds)
Mar 07 13:49:02 pve pvedaemon[4778]: update new package list: /var/lib/pve-manager/pkgupdates
Mar 07 13:49:02 pve pvedaemon[4778]: error reading cached package status in /var/lib/pve-manager/pkgupdates
Mar 07 13:51:02 pve pvedaemon[5333]: <root@pam> successful auth for user 'root@pam'
Mar 07 13:51:44 pve pveproxy[4852]: worker exit
Mar 07 13:51:44 pve pveproxy[12314]: worker 4852 finished
Mar 07 13:51:44 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 13:51:44 pve pveproxy[12314]: worker 7619 started
Mar 07 13:52:31 pve pvedaemon[6787]: command '/bin/nc6 -l -p 5901 -w 10 -e '/usr/sbin/qm vncproxy 100 2>/dev/null'' failed: exit code 1
Mar 07 13:52:34 pve pvedaemon[7706]: starting vnc proxy UPID:pve:00001E1A:0070B312:58BEAD12:vncproxy:100:root@pam:
Mar 07 13:52:34 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:00001E1A:0070B312:58BEAD12:vncproxy:100:root@pam:
Mar 07 13:52:35 pve qm[7709]: VM 100 qmp command failed - VM 100 not running
Mar 07 13:52:38 pve pvedaemon[5049]: <root@pam> successful auth for user 'root@pam'
Mar 07 13:53:44 pve pvedaemon[5333]: <root@pam> successful auth for user 'root@pam'
Mar 07 13:53:48 pve pvedaemon[7819]: start VM 100: UPID:pve:00001E8B:0070CFF1:58BEAD5C:qmstart:100:root@pam:
Mar 07 13:53:48 pve pvedaemon[4778]: <root@pam> starting task UPID:pve:00001E8B:0070CFF1:58BEAD5C:qmstart:100:root@pam:
Mar 07 13:53:48 pve systemd[1]: Starting 100.scope.
Mar 07 13:53:48 pve systemd[1]: Started 100.scope.
Mar 07 13:53:49 pve kernel: kvm: zapping shadow pages for mmio generation wraparound
Mar 07 13:53:49 pve kernel: kvm: zapping shadow pages for mmio generation wraparound
Mar 07 13:53:59 pve kernel: kvm [7827]: vcpu0 unhandled rdmsr: 0xc0010048
Mar 07 13:54:08 pve kernel: kvm [7827]: vcpu0 unhandled rdmsr: 0x3a
Mar 07 13:54:08 pve kernel: kvm [7827]: vcpu0 unhandled rdmsr: 0xd90
Mar 07 13:54:08 pve kernel: kvm [7827]: vcpu0 unhandled rdmsr: 0xc0000103
Mar 07 13:55:38 pve pvedaemon[8012]: start VM 103: UPID:pve:00001F4C:0070FAE2:58BEADCA:qmstart:103:root@pam:
Mar 07 13:55:38 pve pvedaemon[5049]: <root@pam> starting task UPID:pve:00001F4C:0070FAE2:58BEADCA:qmstart:103:root@pam:
Mar 07 13:55:38 pve systemd[1]: Starting 103.scope.
Mar 07 13:55:38 pve systemd[1]: Started 103.scope.
Mar 07 13:55:39 pve kernel: device tap103i0 entered promiscuous mode
Mar 07 13:55:39 pve kernel: vmbr0: port 2(tap103i0) entered forwarding state
Mar 07 13:55:39 pve kernel: vmbr0: port 2(tap103i0) entered forwarding state
Mar 07 13:55:42 pve kernel: kvm: zapping shadow pages for mmio generation wraparound
Mar 07 13:55:42 pve kernel: kvm: zapping shadow pages for mmio generation wraparound
Mar 07 13:57:07 pve systemd-timesyncd[775]: interval/delta/delay/jitter/drift 2048s/+0.005s/0.046s/0.034s/-86ppm
Mar 07 13:57:08 pve pvedaemon[5049]: <root@pam> starting task UPID:pve:00001FFE:00711E5E:58BEAE24:qmshutdown:100:root@pam:
Mar 07 13:57:08 pve pvedaemon[8190]: shutdown VM 100: UPID:pve:00001FFE:00711E5E:58BEAE24:qmshutdown:100:root@pam:
Mar 07 13:57:09 pve pveproxy[6984]: worker exit
Mar 07 13:57:09 pve pveproxy[12314]: worker 6984 finished
Mar 07 13:57:09 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 13:57:09 pve pveproxy[12314]: worker 8191 started
Mar 07 13:57:13 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:0000200A:00712011:58BEAE29:qmshutdown:103:root@pam:
Mar 07 13:57:13 pve pvedaemon[8202]: shutdown VM 103: UPID:pve:0000200A:00712011:58BEAE29:qmshutdown:103:root@pam:
Mar 07 13:57:26 pve kernel: vmbr0: port 2(tap103i0) entered disabled state
Mar 07 13:58:05 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:00002067:00713468:58BEAE5D:qmstart:101:root@pam:
Mar 07 13:58:05 pve pvedaemon[8295]: start VM 101: UPID:pve:00002067:00713468:58BEAE5D:qmstart:101:root@pam:
Mar 07 13:58:05 pve systemd[1]: Starting 101.scope.
Mar 07 13:58:05 pve systemd[1]: Started 101.scope.
Mar 07 13:58:06 pve kernel: device tap101i0 entered promiscuous mode
Mar 07 13:58:06 pve kernel: vmbr0: port 2(tap101i0) entered forwarding state
Mar 07 13:58:06 pve kernel: vmbr0: port 2(tap101i0) entered forwarding state
Mar 07 13:58:07 pve kernel: kvm: zapping shadow pages for mmio generation wraparound
Mar 07 13:58:07 pve kernel: kvm: zapping shadow pages for mmio generation wraparound
Mar 07 13:58:17 pve kernel: kvm [8303]: vcpu0 unhandled rdmsr: 0xc001100d
Mar 07 13:58:18 pve kernel: kvm [8303]: vcpu1 unhandled rdmsr: 0xc001100d
Mar 07 13:59:45 pve pvedaemon[5049]: <root@pam> starting task UPID:pve:0000212B:00715BA5:58BEAEC1:qmshutdown:101:root@pam:
Mar 07 13:59:45 pve pvedaemon[8491]: shutdown VM 101: UPID:pve:0000212B:00715BA5:58BEAEC1:qmshutdown:101:root@pam:
Mar 07 13:59:46 pve pvedaemon[5333]: <root@pam> successful auth for user 'root@pam'
Mar 07 13:59:47 pve kernel: vmbr0: port 2(tap101i0) entered disabled state
Mar 07 13:59:48 pve pveproxy[6491]: worker exit
Mar 07 13:59:48 pve pveproxy[12314]: worker 6491 finished
Mar 07 13:59:48 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 13:59:48 pve pveproxy[12314]: worker 8511 started
Mar 07 14:00:11 pve pvedaemon[5333]: <root@pam> update VM 101: -sockets 1 -numa 0 -cpu host -cores 2
Mar 07 14:00:46 pve pvedaemon[5049]: <root@pam> starting task UPID:pve:00002196:00717365:58BEAEFE:qmstart:101:root@pam:
Mar 07 14:00:46 pve pvedaemon[8598]: start VM 101: UPID:pve:00002196:00717365:58BEAEFE:qmstart:101:root@pam:
Mar 07 14:00:46 pve systemd[1]: Starting 101.scope.
Mar 07 14:00:46 pve systemd[1]: Started 101.scope.
Mar 07 14:00:47 pve kernel: device tap101i0 entered promiscuous mode
Mar 07 14:00:47 pve kernel: vmbr0: port 2(tap101i0) entered forwarding state
Mar 07 14:00:47 pve kernel: vmbr0: port 2(tap101i0) entered forwarding state
Mar 07 14:00:48 pve kernel: kvm: zapping shadow pages for mmio generation wraparound
Mar 07 14:00:48 pve kernel: kvm: zapping shadow pages for mmio generation wraparound
Mar 07 14:00:58 pve kernel: kvm [8606]: vcpu0 unhandled rdmsr: 0xc0010048
Mar 07 14:00:58 pve kernel: kvm [8606]: vcpu1 unhandled rdmsr: 0xc0010048
Mar 07 14:01:03 pve kernel: kvm [8606]: vcpu0 unhandled rdmsr: 0x3a
Mar 07 14:01:03 pve kernel: kvm [8606]: vcpu0 unhandled rdmsr: 0xd90
Mar 07 14:01:03 pve kernel: kvm [8606]: vcpu0 unhandled rdmsr: 0xc0000103
Mar 07 14:06:03 pve pvedaemon[5049]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:07:16 pve pveproxy[6983]: worker exit
Mar 07 14:07:38 pve pvedaemon[4778]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:07:59 pve pvedaemon[9288]: starting vnc proxy UPID:pve:00002448:00721C53:58BEB0AE:vncshell::root@pam:
Mar 07 14:07:59 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:00002448:00721C53:58BEB0AE:vncshell::root@pam:
Mar 07 14:07:59 pve pvedaemon[9288]: launch command: /usr/bin/vncterm -rfbport 5900 -timeout 10 -authpath /nodes/pve -perm Sys.Console -notls -listen localhost -c /bin/bash -l
Mar 07 14:08:44 pve pvedaemon[5333]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:08:45 pve pvedaemon[4778]: <root@pam> starting task UPID:pve:00002494:00722EA4:58BEB0DD:vncshell::root@pam:
Mar 07 14:08:45 pve pvedaemon[9364]: starting vnc proxy UPID:pve:00002494:00722EA4:58BEB0DD:vncshell::root@pam:
Mar 07 14:08:45 pve pvedaemon[9364]: launch command: /usr/bin/vncterm -rfbport 5901 -timeout 10 -authpath /nodes/pve -perm Sys.Console -notls -listen localhost -c /bin/bash -l
Mar 07 14:09:22 pve pvedaemon[5333]: <root@pam> starting task UPID:pve:000024D7:00723CD3:58BEB102:vncshell::root@pam:
Mar 07 14:09:22 pve pvedaemon[9431]: starting vnc proxy UPID:pve:000024D7:00723CD3:58BEB102:vncshell::root@pam:
Mar 07 14:09:22 pve pvedaemon[9431]: launch command: /usr/bin/vncterm -rfbport 5900 -timeout 10 -authpath /nodes/pve -perm Sys.Console -notls -listen localhost -c /bin/bash -l
Mar 07 14:10:28 pve pvedaemon[4778]: <root@pam> starting task UPID:pve:00002546:0072568B:58BEB144:vncshell::root@pam:
Mar 07 14:10:28 pve pvedaemon[9542]: starting vnc proxy UPID:pve:00002546:0072568B:58BEB144:vncshell::root@pam:
Mar 07 14:10:28 pve pvedaemon[9542]: launch command: /usr/bin/vncterm -rfbport 5901 -timeout 10 -authpath /nodes/pve -perm Sys.Console -notls -listen localhost -c /bin/bash -l
Mar 07 14:11:11 pve pveproxy[7619]: worker exit
Mar 07 14:11:11 pve pveproxy[12314]: worker 7619 finished
Mar 07 14:11:11 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 14:11:11 pve pveproxy[12314]: worker 9903 started
Mar 07 14:12:46 pve pveproxy[12314]: worker 8511 finished
Mar 07 14:12:46 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 14:12:46 pve pveproxy[12314]: worker 10399 started
Mar 07 14:12:48 pve pveproxy[10398]: got inotify poll request in wrong process - disabling inotify
Mar 07 14:13:03 pve pveproxy[8191]: worker exit
Mar 07 14:13:03 pve pveproxy[12314]: worker 8191 finished
Mar 07 14:13:03 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 14:13:03 pve pveproxy[12314]: worker 10444 started
Mar 07 14:14:47 pve pvedaemon[4778]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:15:41 pve pveproxy[10398]: worker exit
Mar 07 14:17:01 pve CRON[10848]: pam_unix(cron:session): session opened for user root by (uid=0)
Mar 07 14:17:01 pve CRON[10849]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Mar 07 14:17:01 pve CRON[10848]: pam_unix(cron:session): session closed for user root
Mar 07 14:19:57 pve pvedaemon[4778]: worker exit
Mar 07 14:19:57 pve pvedaemon[1285]: worker 4778 finished
Mar 07 14:19:57 pve pvedaemon[1285]: starting 1 worker(s)
Mar 07 14:19:57 pve pvedaemon[1285]: worker 11119 started
Mar 07 14:21:04 pve pvedaemon[5049]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:21:52 pve rrdcached[1137]: flushing old values
Mar 07 14:21:52 pve rrdcached[1137]: rotating journals
Mar 07 14:21:52 pve rrdcached[1137]: started new journal /var/lib/rrdcached/journal/rrd.journal.1488892912.369175
Mar 07 14:21:52 pve rrdcached[1137]: removing old journal /var/lib/rrdcached/journal/rrd.journal.1488885712.369206
Mar 07 14:21:54 pve smartd[1080]: Device: /dev/sda [SAT], SMART Usage Attribute: 195 Hardware_ECC_Recovered changed from 36 to 35
Mar 07 14:22:38 pve pvedaemon[11119]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:23:45 pve pvedaemon[11119]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:25:27 pve pvedaemon[1285]: worker 5333 finished
Mar 07 14:25:27 pve pvedaemon[1285]: starting 1 worker(s)
Mar 07 14:25:27 pve pvedaemon[1285]: worker 11615 started
Mar 07 14:25:28 pve pvedaemon[11613]: worker exit
Mar 07 14:29:46 pve pvedaemon[5049]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:30:04 pve pveproxy[10444]: worker exit
Mar 07 14:30:04 pve pveproxy[12314]: worker 10444 finished
Mar 07 14:30:04 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 14:30:04 pve pveproxy[12314]: worker 12015 started
Mar 07 14:31:12 pve pveproxy[10399]: worker exit
Mar 07 14:31:12 pve pveproxy[12314]: worker 10399 finished
Mar 07 14:31:12 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 14:31:12 pve pveproxy[12314]: worker 12118 started
Mar 07 14:31:15 pve systemd-timesyncd[775]: interval/delta/delay/jitter/drift 2048s/-0.009s/0.064s/0.034s/-88ppm
Mar 07 14:36:05 pve pvedaemon[11615]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:37:38 pve pvedaemon[11615]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:38:46 pve pvedaemon[11615]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:40:09 pve pvedaemon[5049]: worker exit
Mar 07 14:40:10 pve pvedaemon[1285]: worker 5049 finished
Mar 07 14:40:10 pve pvedaemon[1285]: starting 1 worker(s)
Mar 07 14:40:10 pve pvedaemon[1285]: worker 12923 started
Mar 07 14:42:41 pve pveproxy[9903]: worker exit
Mar 07 14:42:41 pve pveproxy[12314]: worker 9903 finished
Mar 07 14:42:41 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 14:42:41 pve pveproxy[12314]: worker 13151 started
Mar 07 14:44:47 pve pvedaemon[11615]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:46:14 pve pveproxy[12118]: worker exit
Mar 07 14:46:14 pve pveproxy[12314]: worker 12118 finished
Mar 07 14:46:14 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 14:46:14 pve pveproxy[12314]: worker 13464 started
Mar 07 14:51:06 pve pvedaemon[12923]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:52:39 pve pvedaemon[11119]: <root@pam> successful auth for user 'root@pam'
Mar 07 14:53:02 pve pveproxy[12015]: worker exit
Mar 07 14:53:02 pve pveproxy[12314]: worker 12015 finished
Mar 07 14:53:02 pve pveproxy[12314]: starting 1 worker(s)
Mar 07 14:53:02 pve pveproxy[12314]: worker 14071 started
Mar 07 14:53:47 pve pvedaemon[12923]: <root@pam> successful auth for user 'root@pam'
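
In case it is useful: the rdmsr lines can be isolated from the kernel log directly with a plain grep (nothing Proxmox-specific), e.g.:

# show only the KVM rdmsr warnings from the kernel ring buffer
dmesg | grep 'unhandled rdmsr'

# or follow new kernel messages live via the systemd journal
journalctl -k -f | grep 'unhandled rdmsr'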

Thanks in advance!

Regards
kcastner
 
Hi,

You can ignore these messages; they are harmless.
They show up when you set 'host' as the CPU type of the KVM guest.
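
You can check and change this on the command line too. A minimal sketch (VMID 101 is taken from your log above; substitute your own):

# show the current CPU setting of the VM
qm config 101 | grep cpu

# switch from 'host' to the generic kvm64 model (takes effect on the next VM start)
qm set 101 --cpu kvm64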
 
Hello,

OK, then the problem is solved :)
Is it possible to suppress these error messages?

Regards
kcastner
 
Yes, as already written:
don't use 'host' as the CPU type ;-)
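
If you want to keep 'host' anyway, the kvm kernel module has an ignore_msrs parameter: with it set, reads of unknown MSRs return 0 instead of triggering these warnings. Note that, depending on the kernel version, the lines may only turn into rate-limited 'ignored rdmsr' messages. A sketch, run as root on the node:

# takes effect immediately, lasts until the next reboot
echo 1 > /sys/module/kvm/parameters/ignore_msrs

# persist the setting across reboots via modprobe configuration
echo 'options kvm ignore_msrs=1' >> /etc/modprobe.d/kvm.conf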
 
I have just kvm64 as the CPU type and still see these messages, on Proxmox 4.4.
 
Hi immo,

please open a new thread.
Also, knowing your host CPU model would be helpful.
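
For example (any equivalent command is fine):

# print the host CPU model
grep -m1 'model name' /proc/cpuinfo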
 
