Win2008 R2 turns off for no reason - Linux OS fine

Bwalker

New Member
Mar 22, 2017
I have Proxmox 4.4-13/7ea56165 installed for a couple of weeks now on a

Supermicro AMD 6000 series
H8QGi-F

- All 4 CPU sockets populated, 256GB ECC RAM
- 10GbE iSCSI to a SAN that holds the VM OS disks

I put a bunch of Ubuntu VMs on it and they have run flawlessly for 2 weeks straight.
I just put a Win2008R2 on it, and it's been turning OFF randomly every couple of hours.

- I followed the Proxmox guide for installing Win2008
- I started with VirtIO disk and network, then changed them to IDE / Intel e1000 (but IDE carries a ~70% speed penalty)
- I turned off power saving, turned off automatic restart on failure, and installed the QEMU agent
- I turned off ballooning and gave it 16 GB of RAM
- It has 8 cores / 1 socket
- It has the latest Windows updates

When it happens I see no notices of any kind in the Proxmox control panel logs; it's just OFF. I can start it again without a problem. The Windows Event Viewer shows nothing about the crash that I can see.

What am I missing? How or where would Proxmox show why it's OFF?

After swapping to IDE/Intel I am now running a load test at max CPU for 15 minutes, and it's still UP for now...
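So far I have been checking things like this on the host with no luck (assuming the default PVE log locations; the last command is just in case the host OOM-killed the kvm process):

qm status 104                                  # what Proxmox thinks the VM state is
grep -E 'tap104|qmp|vm 104' /var/log/syslog    # host-side messages mentioning the VM
dmesg | grep -iE 'oom|killed process'          # did the kernel kill the kvm process?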

root@cloud1:~# pveversion -v
proxmox-ve: 4.4-84 (running kernel: 4.4.44-1-pve)
pve-manager: 4.4-13 (running version: 4.4-13/7ea56165)
pve-kernel-4.4.44-1-pve: 4.4.44-84
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.2-2~pve4+1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-109
pve-firmware: 1.1-10
libpve-common-perl: 4.0-94
libpve-access-control: 4.0-23
libpve-storage-perl: 4.0-76
pve-libspice-server1: 0.12.8-2
vncterm: 1.3-1
pve-docs: 4.4-3
pve-qemu-kvm: 2.7.1-4
pve-container: 1.0-96
pve-firewall: 2.0-33
pve-ha-manager: 1.0-40
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.7-4
lxcfs: 2.0.6-pve1
criu: 1.6.0-1
novnc-pve: 0.5-9
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.9-pve15~bpo80

If I look in dmesg I see some errors; I'm not sure what they relate to:

[ 156.085514] kvm [2550]: vcpu0 unhandled rdmsr: 0xc001100d
[ 156.277022] kvm [2550]: vcpu1 unhandled rdmsr: 0xc001100d
[ 157.867827] kvm [2624]: vcpu0 unhandled rdmsr: 0xc001100d
[ 158.060401] kvm [2624]: vcpu1 unhandled rdmsr: 0xc001100d
[ 158.242241] device tap104i0 entered promiscuous mode
[ 158.262295] vmbr0: port 5(tap104i0) entered forwarding state
[ 158.262316] vmbr0: port 5(tap104i0) entered forwarding state
[ 160.778841] kvm [2682]: vcpu0 unhandled rdmsr: 0xc001100d
[ 162.682941] kvm: zapping shadow pages for mmio generation wraparound
[ 162.694188] kvm: zapping shadow pages for mmio generation wraparound
[ 169.777927] device tap108i0 entered promiscuous mode
[ 169.799082] vmbr0: port 6(tap108i0) entered forwarding state
[ 169.799103] vmbr0: port 6(tap108i0) entered forwarding state
[ 170.752349] device tap108i1 entered promiscuous mode
[ 170.773501] vmbr2: port 3(tap108i1) entered forwarding state
[ 170.773525] vmbr2: port 3(tap108i1) entered forwarding state
[ 172.013861] kvm: zapping shadow pages for mmio generation wraparound
[ 172.018002] kvm: zapping shadow pages for mmio generation wraparound
[ 189.321530] kvm [2824]: vcpu25 unhandled rdmsr: 0x3a
[ 189.321607] kvm [2824]: vcpu25 unhandled rdmsr: 0xd90
[ 189.321678] kvm [2824]: vcpu25 unhandled rdmsr: 0xc0000103
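(From what I have googled since, these "unhandled rdmsr" lines apparently just mean a guest read a model-specific register that KVM does not emulate, and they are usually harmless. If they get too noisy, the kvm module has an ignore_msrs parameter - untested on my setup, so treat this as a sketch:)

echo 1 > /sys/module/kvm/parameters/ignore_msrs                          # silence at runtime
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm-ignore-msrs.conf  # persist across reboots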

root@cloud1:~# qm config 104
agent: 1
balloon: 0
boot: c
bootdisk: ide0
cores: 8
hotplug: disk,network
ide0: QsKVM1:vm-104-disk-1,size=100G
ide1: QsKVM1:vm-104-disk-2,size=100G
memory: 16000
name: CanarcWin2008
net0: e1000=E6:EA:EB:20:A3:A4,bridge=vmbr0
numa: 0
onboot: 1
ostype: win7
scsihw: virtio-scsi-pci
smbios1: uuid=94dabc99-239f-4d6d-b702-bcc84b325301
sockets: 1
tablet: 0
 
I also installed Windows Server 2012 - same thing, it also turns off by itself. If both are running, it doesn't happen to both at the same time. It seems to happen about every 45 minutes to 1 hour so far.
 
Okay, both of them are turning OFF within about 1 hour, and I even cloned one and ran it on an SSD instead of the iSCSI - it made no difference.
 
Hi,

try to set the VM CPU type to "host".
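From the shell that should be something like this (assuming VM 104; the CPU type can also be changed in the GUI on the VM's Hardware tab):

qm set 104 -cpu host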
 
OK thanks Wolfgang, I have set both instances to HOST and now they report the AMD CPU when I view the System Settings.
As a side note, is it safe for me to run the Virtio-SCSI drivers?

For performance I get

IDE: 120 MB/s (no cache)
VirtIO: 650 MB/s (no cache)
VirtIO-SCSI: 2500 MB/s (writeback)

So I get pretty insane speeds when using SCSI with writeback. This is over 10GbE iSCSI to a SAN with 64GB ECC RAM and ZFS RAID-10. I'm just wondering, since most tutorials seem to say to use the generic VirtIO bus.
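(To take the RAM caches out of the picture when benchmarking, a direct-I/O run is more honest - a sketch with fio, assuming it is installed; the writeback number above is mostly measuring RAM:)

fio --name=seqwrite --filename=/tmp/fio-test.bin --size=1G --bs=1M --rw=write --direct=1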
 
As a side note, is it safe for me to run the Virtio-SCSI drivers?
What do you mean by safe?

Safe in terms of data loss?
That depends on the cache mode.
The bus makes no difference.
 
OK, so I will test with and without SCSI; I guess it all depends on the CACHE, not the BUS. Speed-wise it's 4x faster with the cache on, but it also seems too fast. I'm not sure how risky the cache is - would it only apply to a full host power loss?
 
In the KVM tuning docs it says

QEMU also supports a wide variety of caching modes. If you're using raw volumes or partitions, it is best to avoid the cache completely, which reduces data copies and bus traffic:

qemu -drive file=/dev/mapper/ImagesVolumeGroup-Guest1,cache=none,if=virtio
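In Proxmox terms the cache mode is a per-disk option, settable in the GUI or with qm set; e.g. for a VirtIO-SCSI disk, something like this (assuming the storage and disk names from this thread, re-specifying the drive string with the new cache option):

qm set 104 -scsi0 QsKVM1:vm-104-disk-1,cache=none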
 
Not sure how risky the cache is - would it only apply to a full host power loss?
Correct.
It depends on the importance of your data and your risk mindset.

If you need 100% safety you should use sync, but this is very slow.
If this is a normal VM, no cache is OK.
If it is for testing or development, write-back is OK.
 
Okay, all 3 Windows VMs are OFF again for no reason, after about the same time - roughly 1 hour :(

Status
stopped

2 x Win2008 R2
Win2012

with various settings for disk and LAN, and all of them with CPU = HOST.
Where can I see some kind of log, or anything to tell me why these are OFF?

Cluster log - nothing

These are the only new logs from today in /var/log

-rw-r----- 1 root adm 130488 Mar 23 02:15 debug
-rw-r----- 1 root adm 16187 Mar 23 02:17 auth.log
-rw-r----- 1 root adm 314112 Mar 23 02:45 daemon.log
-rw-r----- 1 root adm 101099 Mar 23 02:58 syslog
-rw-r----- 1 root adm 413457 Mar 23 02:58 messages
-rw-r----- 1 root adm 525789 Mar 23 02:58 kern.log

kern.log

Mar 23 02:13:15 cloud1 kernel: [111477.596606] kvm: zapping shadow pages for mmio generation wraparound
Mar 23 02:13:15 cloud1 kernel: [111477.611465] kvm: zapping shadow pages for mmio generation wraparound
Mar 23 02:13:28 cloud1 pvedaemon[36099]: <root@pam> starting task UPID:cloud1:00009F95:00AA1E23:58D391B8:vncproxy:105:root@pam:
Mar 23 02:13:43 cloud1 pvedaemon[35991]: <root@pam> successful auth for user 'root@pam'
Mar 23 02:14:21 cloud1 pvedaemon[36099]: <root@pam> end task UPID:cloud1:00009F95:00AA1E23:58D391B8:vncproxy:105:root@pam: OK
Mar 23 02:14:22 cloud1 pvedaemon[36099]: <root@pam> starting task UPID:cloud1:00009FF8:00AA32DC:58D391EE:vncproxy:104:root@pam:
Mar 23 02:23:24 cloud1 pvedaemon[36099]: <root@pam> end task UPID:cloud1:00009FF8:00AA32DC:58D391EE:vncproxy:104:root@pam: OK
Mar 23 02:23:25 cloud1 pvedaemon[35991]: <root@pam> starting task UPID:cloud1:0000A341:00AB06FA:58D3940D:vncproxy:105:root@pam:
Mar 23 02:25:14 cloud1 pvedaemon[35991]: <root@pam> end task UPID:cloud1:0000A341:00AB06FA:58D3940D:vncproxy:105:root@pam: OK
Mar 23 02:25:34 cloud1 pvedaemon[36099]: <root@pam> starting task UPID:cloud1:0000A404:00AB398D:58D3948E:vncproxy:105:root@pam:
Mar 23 02:28:43 cloud1 pvedaemon[37502]: <root@pam> successful auth for user 'root@pam'
Mar 23 02:31:05 cloud1 pvedaemon[42215]: <root@pam> starting task UPID:cloud1:0000A610:00ABBAC9:58D395D9:vncproxy:105:root@pam:
Mar 23 02:31:14 cloud1 pvedaemon[42215]: <root@pam> end task UPID:cloud1:0000A610:00ABBAC9:58D395D9:vncproxy:105:root@pam: OK
Mar 23 02:31:14 cloud1 pvedaemon[37502]: <root@pam> starting task UPID:cloud1:0000A62A:00ABBE7A:58D395E2:vncproxy:106:root@pam:
Mar 23 02:31:25 cloud1 pvedaemon[37502]: <root@pam> end task UPID:cloud1:0000A62A:00ABBE7A:58D395E2:vncproxy:106:root@pam: OK
Mar 23 02:32:06 cloud1 kernel: [112608.171250] vmbr0: port 8(tap105i0) entered disabled state
Mar 23 02:34:16 cloud1 kernel: [112737.878933] vmbr0: port 7(tap106i0) entered disabled state
Mar 23 02:43:44 cloud1 pvedaemon[37502]: <root@pam> successful auth for user 'root@pam'
Mar 23 02:49:13 cloud1 kernel: [113634.805096] vmbr0: port 5(tap104i0) entered disabled state
Mar 23 02:58:45 cloud1 pvedaemon[37502]: <root@pam> successful auth for user 'root@pam'

Syslog

Mar 23 02:25:28 cloud1 pvedaemon[2089]: worker 35991 finished
Mar 23 02:25:28 cloud1 pvedaemon[2089]: starting 1 worker(s)
Mar 23 02:25:28 cloud1 pvedaemon[2089]: worker 41981 started
Mar 23 02:25:34 cloud1 pvedaemon[36099]: <root@pam> starting task UPID:cloud1:0000A404:00AB398D:58D3948E:vncproxy:105:root@pam:
Mar 23 02:25:34 cloud1 pvedaemon[41988]: starting vnc proxy UPID:cloud1:0000A404:00AB398D:58D3948E:vncproxy:105:root@pam:
Mar 23 02:28:02 cloud1 pvedaemon[36099]: worker exit
Mar 23 02:28:02 cloud1 pvedaemon[2089]: worker 36099 finished
Mar 23 02:28:02 cloud1 pvedaemon[2089]: starting 1 worker(s)
Mar 23 02:28:02 cloud1 pvedaemon[2089]: worker 42215 started
Mar 23 02:28:43 cloud1 pvedaemon[37502]: <root@pam> successful auth for user 'root@pam'
Mar 23 02:30:07 cloud1 pveproxy[37751]: worker exit
Mar 23 02:30:07 cloud1 pveproxy[62125]: worker 37751 finished
Mar 23 02:30:07 cloud1 pveproxy[62125]: starting 1 worker(s)
Mar 23 02:30:07 cloud1 pveproxy[62125]: worker 42421 started
Mar 23 02:31:05 cloud1 pvedaemon[42215]: <root@pam> starting task UPID:cloud1:0000A610:00ABBAC9:58D395D9:vncproxy:105:root@pam:
Mar 23 02:31:05 cloud1 pvedaemon[42512]: starting vnc proxy UPID:cloud1:0000A610:00ABBAC9:58D395D9:vncproxy:105:root@pam:
Mar 23 02:31:14 cloud1 pvedaemon[42215]: <root@pam> end task UPID:cloud1:0000A610:00ABBAC9:58D395D9:vncproxy:105:root@pam: OK
Mar 23 02:31:14 cloud1 pvedaemon[37502]: <root@pam> starting task UPID:cloud1:0000A62A:00ABBE7A:58D395E2:vncproxy:106:root@pam:
Mar 23 02:31:14 cloud1 pvedaemon[42538]: starting vnc proxy UPID:cloud1:0000A62A:00ABBE7A:58D395E2:vncproxy:106:root@pam:
Mar 23 02:31:25 cloud1 pvedaemon[37502]: <root@pam> end task UPID:cloud1:0000A62A:00ABBE7A:58D395E2:vncproxy:106:root@pam: OK
Mar 23 02:32:06 cloud1 kernel: [112608.171250] vmbr0: port 8(tap105i0) entered disabled state
Mar 23 02:33:00 cloud1 systemd-timesyncd[1353]: interval/delta/delay/jitter/drift 2048s/+0.000s/0.075s/0.000s/-34ppm
Mar 23 02:34:16 cloud1 kernel: [112737.878933] vmbr0: port 7(tap106i0) entered disabled state
Mar 23 02:43:44 cloud1 pvedaemon[37502]: <root@pam> successful auth for user 'root@pam'
Mar 23 02:45:24 cloud1 pveproxy[40739]: worker exit
Mar 23 02:45:24 cloud1 pveproxy[62125]: worker 40739 finished
Mar 23 02:45:24 cloud1 pveproxy[62125]: starting 1 worker(s)
Mar 23 02:45:24 cloud1 pveproxy[62125]: worker 43845 started
Mar 23 02:49:13 cloud1 kernel: [113634.805096] vmbr0: port 5(tap104i0) entered disabled state
Mar 23 02:58:45 cloud1 pvedaemon[37502]: <root@pam> successful auth for user 'root@pam'
root@cloud1:/var/log#
 
Try to run the kvm process in the foreground.

Take the output from qm showconfig <VMID>,
then remove the id and daemonize options from the string and run it in the shell.

When it crashes you will hopefully get some output.
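To be safe you can also capture everything the process prints to a file, e.g. (a sketch; the placeholder stands for the full command line from showcmd):

/usr/bin/kvm <options minus -id and -daemonize> 2>&1 | tee /tmp/vm104-foreground.log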
 
I think you mean showcmd, not showconfig

root@cloud1:~# qm showcmd 104
/usr/bin/kvm -id 104 -chardev 'socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/104.pid -daemonize -smbios 'type=1,uuid=94dabc99-239f-4d6d-b702-bcc84b325301' -name CanarcWin2008 -smp '8,sockets=1,cores=8,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/104.vnc,x509,password -no-hpet -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed' -m 16032 -k en-us -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -chardev 'socket,path=/var/run/qemu-server/104.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:1db3f05c2956' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/QsKVM/vm-104-disk-1,if=none,id=drive-scsi0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0' -drive 'file=/dev/QsKVM/vm-104-disk-2,if=none,id=drive-scsi1,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1' -netdev 'type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=E6:EA:EB:20:A3:A4,netdev=net0,bus=pci.0,addr=0x12,id=net0' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'
 
Okay, I modified the command to run in the foreground: took out -id 104 and -daemonize

/usr/bin/kvm -chardev 'socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/104.pid -smbios 'type=1,uuid=94dabc99-239f-4d6d-b702-bcc84b325301' -name CanarcWin2008 -smp '8,sockets=1,cores=8,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/104.vnc,x509,password -no-hpet -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed' -m 16032 -k en-us -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -chardev 'socket,path=/var/run/qemu-server/104.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:1db3f05c2956' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/QsKVM/vm-104-disk-1,if=none,id=drive-scsi0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0' -drive 'file=/dev/QsKVM/vm-104-disk-2,if=none,id=drive-scsi1,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1' -netdev 'type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=E6:EA:EB:20:A3:A4,netdev=net0,bus=pci.0,addr=0x12,id=net0' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'

One strange thing: in /dev/QsKVM/ the disks for 104 are not there. If I start and then shut down the VM, the disks vanish and I cannot start it again; I have to edit the disks in the UI (adding cache etc.) for them to show up in the folder, then I can run it.
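(That is presumably LVM activation at work: Proxmox activates a VM's logical volumes on start and deactivates them on stop, and the /dev nodes only exist while the LVs are active. Assuming the disks are plain LVs in a volume group called QsKVM, they can also be brought back by hand - a sketch:)

lvscan | grep QsKVM                      # shows whether the LVs are ACTIVE or inactive
lvchange -ay /dev/QsKVM/vm-104-disk-1    # activate so the /dev node reappears
lvchange -ay /dev/QsKVM/vm-104-disk-2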
 
I ran it for 1 hour and the same thing - no output of any kind.

root@cloud1:~# qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
100 Tsuru1 running 8000 100.00 2550
101 Sasamat running 2048 100.00 2624
102 mms stopped 64000 2000.00 0
103 ProxyVitality running 512 15.00 2682
104 CanarcWin2008 running 16032 0.00 46270
105 CanarcSQL2008 stopped 8000 100.00 0
106 Stratuscore1 stopped 8192 100.00 0
107 Snowstorm stopped 8000 100.00 0
108 Gitlab-runner running 8000 400.00 2824

It was running.

Then it just stopped - no output in the shell.

root@cloud1:/dev/QsKVM# /usr/bin/kvm -chardev 'socket,id=qmp,path=/var/run/qemu-server/104.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/104.pid -smbios 'type=1,uuid=94dabc99-239f-4d6d-b702-bcc84b325301' -name CanarcWin2008 -smp '8,sockets=1,cores=8,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/104.vnc,x509,password -no-hpet -cpu 'host,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed' -m 16032 -k en-us -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -chardev 'socket,path=/var/run/qemu-server/104.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:1db3f05c2956' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/QsKVM/vm-104-disk-1,if=none,id=drive-scsi0,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0' -drive 'file=/dev/QsKVM/vm-104-disk-2,if=none,id=drive-scsi1,cache=writeback,format=raw,aio=threads,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1' -netdev 'type=tap,id=net0,ifname=tap104i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=E6:EA:EB:20:A3:A4,netdev=net0,bus=pci.0,addr=0x12,id=net0' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'
root@cloud1:/dev/QsKVM#

root@cloud1:/dev/QsKVM# qm list
VMID NAME STATUS MEM(MB) BOOTDISK(GB) PID
100 Tsuru1 running 8000 100.00 2550
101 Sasamat running 2048 100.00 2624
102 mms stopped 64000 2000.00 0
103 ProxyVitality running 512 15.00 2682
104 CanarcWin2008 stopped 16032 100.00 0
105 CanarcSQL2008 stopped 8000 100.00 0
106 Stratuscore1 stopped 8192 100.00 0
107 Snowstorm stopped 8000 100.00 0
108 Gitlab-runner running 8000 400.00 2824
 
Any way to get more detailed logs? It's driving me nuts, and I need to have this up before Monday, or this client (our first potential Proxmox client) will cancel and buy a dedicated server instead if we fail this POC for them.
 
Are these Windows VMs freshly installed machines, or were they migrated?
 
Okay, I think I figured it out. Since it appears to happen after 1 hour, I googled and found out that even with the 180-day trial on Server 2008 you must ACTIVATE it, or after 10 days it will shut itself down every hour.

Since my VMs otherwise run excellently, I am very sure this is the issue. I just activated my Windows Server and hope this fixes it. It seems to make sense.

Maybe you could add this to your BEST PRACTICE documents and make a note to ACTIVATE the server right away, or it will shut down every hour with no logs or warnings.
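For anyone hitting this later: the license state can be checked and fixed from inside the guest with the built-in slmgr script, from an elevated command prompt (a minimal checklist, not Proxmox-specific):

slmgr /dli   (display the current license and activation state)
slmgr /xpr   (show when the current state expires)
slmgr /ato   (activate Windows online)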
 
