After migration of Windows Server 2019 VMs a reboot is needed

ednt

Whenever we migrate Windows Server VMs, they get stuck on an error screen.
After a reboot they work normally again.

Today I migrated some VMs to another node and back in order to update the PBS library used by the VMs.
All 4 VMs needed a restart afterwards.
They were a domain controller, two Exchange servers, and an RDP gateway.
After migration, CPU load was at 100%.
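
For anyone reproducing this: the equivalent CLI round trip would look roughly like the sketch below (the VMID 101 and the node names are placeholders, not taken from this thread):

    # live-migrate the running VM to the other node and back
    qm migrate 101 cpu11 --online
    qm migrate 101 cpu02 --online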

[2 screenshot attachments]

The QEMU guest agent is installed inside the VMs and the option in Proxmox is enabled.
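
In case it helps others checking the same thing: the agent option and whether the agent actually answers can be verified from the node shell, roughly like this (the VMID 101 is a placeholder):

    # enable the QEMU guest agent option for the VM
    qm set 101 --agent enabled=1
    # ping the agent inside the guest; no output means it answered
    qm agent 101 ping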

The PVE version is 7.2.7.

Migrating Debian VMs always works.

Any ideas?

Best regards
 
Some new tests:
The migration to cpu11 worked. The problem starts when migrating back to cpu02.

The difference:
cpu02 uses Open vSwitch, while cpu11 doesn't.

It looks like migrating from a node without Open vSwitch to a node with Open vSwitch results in this behaviour.
The interfaces have the same names on both nodes, of course.
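
To illustrate the difference: both nodes expose a bridge with the same name, but the definitions in /etc/network/interfaces differ. A simplified sketch (NIC name and addresses are made up):

    # cpu11: classic Linux bridge
    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.11/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

    # cpu02: Open vSwitch bridge
    auto eno1
    iface eno1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.2/24
        gateway 192.0.2.1
        ovs_type OVSBridge
        ovs_ports eno1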

I avoid Open vSwitch on new nodes because, after a Debian update of openvswitch, all VMs on such nodes were left without network.
An ifreload -a on the PVE nodes was needed to bring it back. Horrible!
This was only resolvable via the servers' iLO; otherwise a longer downtime would have been needed.
 
Ok ....

I removed the Open vSwitch configuration from one node and tried it again.
Unfortunately, this doesn't solve the problem.
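
For completeness, "removing the Open vSwitch configuration" means roughly the following sketch; the exact stanza depends on the setup:

    # 1. rewrite vmbr0 in /etc/network/interfaces as a plain Linux bridge
    #    (bridge-ports instead of ovs_type/ovs_ports)
    # 2. apply the new configuration
    ifreload -a
    # 3. optionally remove the package once nothing references OVS anymore
    apt remove openvswitch-switch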

To be more precise:

If I move a Windows server from one old node to another old node and back, it works.
If I move a Windows server from an old node to a new node, it works.
But if I move it back to the old node, I get a blue screen with:
UNEXPECTED KERNEL MODE TRAP
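
That stop code after a cross-host live migration can point at a CPU feature mismatch between source and target host, so one thing worth checking is which CPU type the affected VMs are configured with (a sketch; the VMID is a placeholder):

    # show the configured CPU type; no "cpu:" line means the kvm64 default
    qm config 101 | grep -i '^cpu'
    # "cpu: host" passes all flags of the current node through to the guest,
    # which is risky when migrating between different CPU generations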

Old and new nodes are running the same Proxmox version.
"Old" means they were in the original cluster; "new" means they were added to the cluster later (8 weeks ago).

We also removed the network interfaces from a test Windows server, but the result is still the same.
We cannot find any other difference. (OK, the hardware differs: the old nodes are HP G8, the new ones are HP G9.)
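
Since G8 and G9 machines carry different CPU generations, one difference that can be made visible is the flag set each node exposes. A sketch, using the node names from above:

    # dump the CPU flags of both nodes and compare them
    ssh cpu02 "grep -m1 '^flags' /proc/cpuinfo" > /tmp/flags-cpu02
    ssh cpu11 "grep -m1 '^flags' /proc/cpuinfo" > /tmp/flags-cpu11
    diff /tmp/flags-cpu02 /tmp/flags-cpu11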

The migration always ends with TASK OK, and the syslog also shows nothing about this behaviour.
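
If the task log stays clean, it can still help to follow the logs on the target node live while a test migration runs (nothing VM-specific assumed here):

    # on the target node, in two shells, while migrating a test VM
    journalctl -f
    dmesg -w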

At the moment I'm very unsure about what I can migrate from where to where without trouble.
That's no fun. :(
 
Yes, you were right.
I needed to boot both nodes with kernel 5.13.xx to be able to migrate back and forth without problems.
A very ugly problem.
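
If anyone hits the same issue: recent versions of proxmox-boot-tool can pin a kernel, assuming the nodes boot via proxmox-boot-tool at all; the version string below is only an example:

    # list installed kernels, pin one on both nodes, then reboot
    proxmox-boot-tool kernel list
    proxmox-boot-tool kernel pin 5.13.19-6-pve
    reboot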
 
