[SOLVED] Windows Server 2019 VM Constantly Rebooting

gentech

Member
Dec 2, 2021
Hi Proxmox community,

Happy you exist. I am pretty much a complete noob here with proxmox, so any help would be greatly appreciated. I've done research and troubleshooting for the past 3 days and I can't figure out this issue. I will provide as much context as I can below.

General Description:
I am running Proxmox VE on a custom-built computer. I have only two VMs. One VM runs Windows Server 2019 and hosts a PaperCut print server for a LAN. The plain and simple description of the issue: about every hour, the VM just completely stops. I open Proxmox, hit start, it fires back up, and everything is peachy for about an hour until it does the exact same thing again.

Specific Troubleshooting Considerations:

Proxmox VE Errors Thrown:
- There is one error thrown by Proxmox for this VM: "Error: Failed to run vncproxy." Here are some screenshots related to this error:
Errorfailedtorunvncproxy.png

errorfailedtorunvncproxystatus.png


Event Viewer Errors: When I look through the Windows Server Event Viewer, there are two errors that come up fairly consistently.

  1. Event 124, Kernel-Boot: "The virtualization-based security enablement policy check at phase 0 failed with status: The request is not supported."
  2. Event 41, Hyper-V-Hypervisor: "Hypervisor launch failed; Either VMX not present or not enabled in BIOS." (see the note below this list)
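
(Note on #2: from what I've read, that error usually means the guest can't see the VMX feature, i.e. nested virtualization. On the Proxmox host this can apparently be checked and changed roughly as below, though I have no idea yet whether it's related to the hourly stops. Shown for an Intel CPU; AMD uses kvm_amd/svm instead.)

cat /sys/module/kvm_intel/parameters/nested    # "Y" or "1" means nested virtualization is enabled on the host
qm set 100 --cpu host                          # expose host CPU features (incl. VMX) to VM 100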

Device Manager Observations: I also noticed a few things in Device Manager that could be contributing factors. I tried updating these drivers from within the VM, but I'm not entirely sure how that works or what's going on there exactly.

1. The "Other PCI Device" and "PCI Simple Communications Controller" entries seem to have no driver installed?
devicemanagerotherpcidevice.png
2. The Hyper-V bus driver seems to not be working, or something?
hypervdriver.png

Any help the community can provide with this issue will benefit an entire non-profit campus with its printing woes.


I have spent quite a bit of time researching this so far, to no avail. Some sources said it could be hardware-related and to check temperatures and such; those seem fine, and my other VM is not experiencing any issues. Others said it had something to do with registry permissions; I tried editing those and making sure the Windows Server user could perform all of the necessary activities to launch things. The issue persisted.

Thanks in advance for any responses you might offer.
Thank you,
Ruach
 
Hello,

What is the PVE version (pveversion -v)? Does only VM 100 have the issue? Please post the VM config (qm config 100).
Did you see anything in the syslog/journalctl?
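
For example, from the PVE host shell (the journalctl time window below is just a placeholder; adjust it to around when the VM stopped):

pveversion -v
qm config 100
journalctl --since "2021-12-02 08:00" --until "2021-12-02 10:00"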
 
Hi Moayad, thank you so much for your reply! Here is the information requested:

pveversion -v yields:
proxmox-ve: 6.3-1 (running kernel: 5.4.106-1-pve)

Yes, only VM 100 has the issue.

qm config 100 yields:

root@pve:~# qm config 100
agent: 1
boot: order=scsi0;ide2;net0
cores: 3
ide2: local:iso/Windows2019.iso,media=cdrom
ide3: local:iso/virtio-win-0.1.185.iso,media=cdrom,size=402812K
machine: pc-i440fx-5.2
memory: 11264
name: PaperCutWinSvr2019
net0: e1000=C6:F9:50:4D:1C:8A,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: win10
scsi0: local-zfs:vm-100-disk-0,discard=on,size=400G
scsihw: virtio-scsi-pci
smbios1: uuid=ca4e587c-c786-4f39-976a-c2aa0c7b4a88
sockets: 1
vmgenid: 811c7078-2dbf-4c1a-8bf5-650f3aac8dd8


I pasted a few snippets of yesterday's log below, starting around 8 am, as I was constantly going in and restarting the VM after it crashed.

I noticed a few things in the logs:
1. A few temperature log entries, but they don't seem to indicate any failures?
2. There is an autonegotiation warning thrown when I try to boot VM 100.
3. "Could not generate persistent MAC address for fwln100i0: No such file or directory" -- I'm not sure exactly what that means; maybe something became corrupted or was deleted by accident?
4. There are also ports entering a "blocking state", and I'm not sure what that means exactly.
5. A few "Failed to run vncproxy" errors as well.

Dec 02 08:37:01 pve systemd[1]: Started Proxmox VE replication runner.
Dec 02 08:37:05 pve pvedaemon[1846]: <root@pam> successful auth for user 'root@pam'
Dec 02 08:37:05 pve pvedaemon[23957]: <root@pam> starting task UPID:pve:000035A6:1ACC3350:61A8DA11:vncproxy:101:root@pam:
Dec 02 08:37:05 pve pvedaemon[13734]: starting vnc proxy UPID:pve:000035A6:1ACC3350:61A8DA11:vncproxy:101:root@pam:
Dec 02 08:37:07 pve pvedaemon[23957]: <root@pam> end task UPID:pve:000035A6:1ACC3350:61A8DA11:vncproxy:101:root@pam: OK
Dec 02 08:37:08 pve pvedaemon[13758]: starting vnc proxy UPID:pve:000035BE:1ACC344F:61A8DA14:vncproxy:100:root@pam:
Dec 02 08:37:08 pve pvedaemon[23957]: <root@pam> starting task UPID:pve:000035BE:1ACC344F:61A8DA14:vncproxy:100:root@pam:
Dec 02 08:37:08 pve qm[13760]: VM 100 qmp command failed - VM 100 not running
Dec 02 08:37:08 pve pvedaemon[13758]: Failed to run vncproxy.
Dec 02 08:37:08 pve pvedaemon[23957]: <root@pam> end task UPID:pve:000035BE:1ACC344F:61A8DA14:vncproxy:100:root@pam: Failed to run vncproxy.
Dec 02 08:37:08 pve pvedaemon[12595]: <root@pam> starting task UPID:pve:000035C1:1ACC34A0:61A8DA14:qmstart:100:root@pam:
Dec 02 08:37:08 pve pvedaemon[13761]: start VM 100: UPID:pve:000035C1:1ACC34A0:61A8DA14:qmstart:100:root@pam:
Dec 02 08:37:08 pve systemd[1]: Started 100.scope.
Dec 02 08:37:08 pve systemd-udevd[13771]: Using default interface naming scheme 'v240'.
Dec 02 08:37:08 pve systemd-udevd[13771]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 08:37:08 pve systemd-udevd[13771]: Could not generate persistent MAC address for tap100i0: No such file or directory
Dec 02 08:37:09 pve kernel: device tap100i0 entered promiscuous mode
Dec 02 08:37:09 pve systemd-udevd[13771]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 08:37:09 pve systemd-udevd[13771]: Could not generate persistent MAC address for fwbr100i0: No such file or directory
Dec 02 08:37:09 pve systemd-udevd[13769]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 08:37:09 pve systemd-udevd[13770]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 08:37:09 pve systemd-udevd[13770]: Using default interface naming scheme 'v240'.
Dec 02 08:37:09 pve systemd-udevd[13769]: Using default interface naming scheme 'v240'.
Dec 02 08:37:09 pve systemd-udevd[13770]: Could not generate persistent MAC address for fwln100i0: No such file or directory
Dec 02 08:37:09 pve systemd-udevd[13769]: Could not generate persistent MAC address for fwpr100p0: No such file or directory
Dec 02 08:37:09 pve kernel: fwbr100i0: port 1(fwln100i0) entered blocking state
Dec 02 08:37:09 pve kernel: fwbr100i0: port 1(fwln100i0) entered disabled state
Dec 02 08:37:09 pve kernel: device fwln100i0 entered promiscuous mode
Dec 02 08:37:09 pve kernel: fwbr100i0: port 1(fwln100i0) entered blocking state
Dec 02 08:37:09 pve kernel: fwbr100i0: port 1(fwln100i0) entered forwarding state
Dec 02 08:37:09 pve kernel: vmbr0: port 2(fwpr100p0) entered blocking state
Dec 02 08:37:09 pve kernel: vmbr0: port 2(fwpr100p0) entered disabled state
Dec 02 08:37:09 pve kernel: device fwpr100p0 entered promiscuous mode
Dec 02 08:37:09 pve kernel: vmbr0: port 2(fwpr100p0) entered blocking state
Dec 02 08:37:09 pve kernel: vmbr0: port 2(fwpr100p0) entered forwarding state
Dec 02 08:37:09 pve kernel: fwbr100i0: port 2(tap100i0) entered blocking state
Dec 02 08:37:09 pve kernel: fwbr100i0: port 2(tap100i0) entered disabled state
Dec 02 08:37:09 pve kernel: fwbr100i0: port 2(tap100i0) entered blocking state
Dec 02 08:37:09 pve kernel: fwbr100i0: port 2(tap100i0) entered forwarding state
Dec 02 08:37:09 pve pvedaemon[12595]: <root@pam> end task UPID:pve:000035C1:1ACC34A0:61A8DA14:qmstart:100:root@pam: OK
Dec 02 08:37:09 pve pvedaemon[12595]: <root@pam> starting task UPID:pve:00003605:1ACC34DA:61A8DA15:vncproxy:100:root@pam:
Dec 02 08:37:09 pve pvedaemon[13829]: starting vnc proxy UPID:pve:00003605:1ACC34DA:61A8DA15:vncproxy:100:root@pam:

Dec 02 09:37:25 pve pvedaemon[8825]: <root@pam> end task UPID:pve:0000368A:1ACEC3EB:61A8E0A2:vncproxy:100:root@pam: OK
Dec 02 09:37:25 pve qmeventd[9592]: Starting cleanup for 100
Dec 02 09:37:25 pve qmeventd[9592]: Finished cleanup for 100
Dec 02 09:37:25 pve pvedaemon[12595]: <root@pam> starting task UPID:pve:00004537:1AD1B96D:61A8E835:vncproxy:100:root@pam:
Dec 02 09:37:25 pve pvedaemon[17719]: starting vnc proxy UPID:pve:00004537:1AD1B96D:61A8E835:vncproxy:100:root@pam:
Dec 02 09:37:26 pve qm[17721]: VM 100 qmp command failed - VM 100 not running
Dec 02 09:37:26 pve pvedaemon[17719]: Failed to run vncproxy.
Dec 02 09:37:26 pve pvedaemon[12595]: <root@pam> end task UPID:pve:00004537:1AD1B96D:61A8E835:vncproxy:100:root@pam: Failed to run vncproxy.
Dec 02 09:38:00 pve systemd[1]: Starting Proxmox VE replication runner...
Dec 02 09:38:01 pve systemd[1]: pvesr.service: Succeeded.
Dec 02 09:38:01 pve systemd[1]: Started Proxmox VE replication runner.
Dec 02 09:38:20 pve pvedaemon[12595]: <root@pam> successful auth for user 'root@pam'
Dec 02 09:39:00 pve systemd[1]: Starting Proxmox VE replication runner...
Dec 02 09:39:01 pve systemd[1]: pvesr.service: Succeeded.
Dec 02 09:39:01 pve systemd[1]: Started Proxmox VE replication runner.
Dec 02 09:39:29 pve pveproxy[10875]: worker exit
Dec 02 09:39:29 pve pveproxy[10165]: worker 10875 finished
Dec 02 09:39:29 pve pveproxy[10165]: starting 1 worker(s)
Dec 02 09:39:29 pve pveproxy[10165]: worker 18755 started
Dec 02 09:40:00 pve systemd[1]: Starting Proxmox VE replication runner...
Dec 02 09:40:01 pve systemd[1]: pvesr.service: Succeeded.
Dec 02 09:40:01 pve systemd[1]: Started Proxmox VE replication runner.
Dec 02 09:41:00 pve systemd[1]: Starting Proxmox VE replication runner...
Dec 02 09:41:01 pve systemd[1]: pvesr.service: Succeeded.
Dec 02 09:41:01 pve systemd[1]: Started Proxmox VE replication runner.
Dec 02 09:42:00 pve systemd[1]: Starting Proxmox VE replication runner...
Dec 02 09:42:01 pve systemd[1]: pvesr.service: Succeeded.
Dec 02 09:42:01 pve systemd[1]: Started Proxmox VE replication runner.
Dec 02 09:42:01 pve pvedaemon[12595]: <root@pam> starting task UPID:pve:00004E1F:1AD2255B:61A8E949:qmstart:100:root@pam:
Dec 02 09:42:01 pve pvedaemon[19999]: start VM 100: UPID:pve:00004E1F:1AD2255B:61A8E949:qmstart:100:root@pam:
Dec 02 09:42:01 pve systemd[1]: Started 100.scope.
Dec 02 09:42:01 pve systemd-udevd[19970]: Using default interface naming scheme 'v240'.
Dec 02 09:42:01 pve systemd-udevd[19970]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 09:42:01 pve systemd-udevd[19970]: Could not generate persistent MAC address for tap100i0: No such file or directory
Dec 02 09:42:02 pve kernel: device tap100i0 entered promiscuous mode
Dec 02 09:42:02 pve systemd-udevd[19970]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 09:42:02 pve systemd-udevd[19970]: Could not generate persistent MAC address for fwbr100i0: No such file or directory
Dec 02 09:42:02 pve systemd-udevd[20007]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 09:42:02 pve systemd-udevd[20007]: Using default interface naming scheme 'v240'.
Dec 02 09:42:02 pve systemd-udevd[20008]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 02 09:42:02 pve systemd-udevd[20007]: Could not generate persistent MAC address for fwpr100p0: No such file or directory
Dec 02 09:42:02 pve systemd-udevd[20008]: Using default interface naming scheme 'v240'.
Dec 02 09:42:02 pve systemd-udevd[20008]: Could not generate persistent MAC address for fwln100i0: No such file or directory
Dec 02 09:42:02 pve kernel: fwbr100i0: port 1(fwln100i0) entered blocking state
Dec 02 09:42:02 pve kernel: fwbr100i0: port 1(fwln100i0) entered disabled state
Dec 02 09:42:02 pve kernel: device fwln100i0 entered promiscuous mode
Dec 02 09:42:02 pve kernel: fwbr100i0: port 1(fwln100i0) entered blocking state
Dec 02 09:42:02 pve kernel: fwbr100i0: port 1(fwln100i0) entered forwarding state
Dec 02 09:42:02 pve kernel: vmbr0: port 2(fwpr100p0) entered blocking state
Dec 02 09:42:02 pve kernel: vmbr0: port 2(fwpr100p0) entered disabled state
Dec 02 09:42:02 pve kernel: device fwpr100p0 entered promiscuous mode
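
One more thing I noticed: right before the 09:37 "Failed to run vncproxy" entry there is "qmeventd: Starting cleanup for 100", which as far as I understand means the QEMU process for VM 100 had already exited on its own by that point. If it helps, I can also check the kernel log for out-of-memory kills around that time with something like the following (the time window is just an example):

journalctl -k --since "2021-12-02 09:30" --until "2021-12-02 09:40" | grep -iE "oom|out of memory"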


Thank you in advance for your guidance and support!
 
Check if your Windows Server 2019 is licensed and activated:



One hour is the default shutdown period for an unlicensed Windows Server 2019 after its initial 180-day evaluation period.
Oh. My. Goodness. I swear if that's the issue I'm going to feel like such an idiot LOL. Checking now.
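
For anyone else who lands on this thread: from what I've read, the activation status can be checked from an elevated command prompt inside the VM, e.g.:

slmgr /xpr
slmgr /dlv

The first shows whether Windows is permanently activated or still on an evaluation/grace license; the second shows detailed licensing information.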
 
Victor, my friend, thank you!! It's been up for 4 hours now. What a stupid thing to have happen lol!
 