[SOLVED] [Warning] Latest patch just broke all my Windows VMs (6.3-4) [Patch Inside] (See #53)

Too late for us too: we triggered the bug with live backups of our Windows Server VMs. All three had their NICs reset; it was fun... And just the day we switched to the enterprise repository, too... That is probably a dumb question, but should the no-subscription repository be disabled when one uses the enterprise repo? (we did).
 
Just wanted to add a "me too". We triggered it and found out late Friday afternoon, then scrambled to correct the NICs and remove the ghost/missing NICs. Another side effect is that some servers are not able to install the latest qemu-guest-agent in Windows: the MSI complains there was a problem and will not install. Linux servers are not affected at all, only Windows. Not sure if I should create a new thread specifically for this, so I'll leave it here to see if it gets any bites.
 
I'm not sure if it has something to do with this, but I noticed today that I can no longer see the disk usage of the QEMU VMs in the Proxmox GUI.

Did that even work before? Sorry, I didn't pay attention. xD
 
That is probably a dumb question, but should the no-subscription repository be disabled when one uses the enterprise repo? (we did).

Yes, when you have a subscription you should only have the stock Debian + PVE enterprise repositories enabled (except for testing purposes, obviously).
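
For reference, a minimal sketch of what that looks like on PVE 6.x (Debian Buster); the file names are the defaults a standard installation uses:

Bash:
# the enterprise repository (needs a valid subscription key)
cat /etc/apt/sources.list.d/pve-enterprise.list
deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

# no pve-no-subscription (or pvetest) entry should be active in
# /etc/apt/sources.list or /etc/apt/sources.list.d/, i.e. comment it out:
# deb http://download.proxmox.com/debian/pve buster pve-no-subscription

apt update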
 
FYI: I found the problematic commit, which is sadly not a real bug but a correction to ACPI device UIDs to make QEMU behave in a more standards-conformant way. As it's late Saturday and such things cannot be decided alone, I cannot present a final solution yet; rather, I've informed the original author of the patch and some other QEMU and EDK2 maintainers with good in-depth knowledge of all the pieces involved (which is mostly firmware like SeaBIOS/OVMF, and QEMU itself), so that we can get to a sane, future-proof solution, if there is one, or a workaround/better upgrade path in a more defined manner.

Until then, if you want to stay on QEMU 5.2 you can use the following package I built. It has only that commit reverted and makes my reproducer happy again (note: the other new "ghost" device won't go away, but the original Ethernet adapter will be in use again, from Windows' point of view).

You can download and install that build by doing:

Bash:
wget http://download.proxmox.com/temp/pve-qemu-5.2-with-acpi-af1b80ae56-reverted/pve-qemu-kvm_5.2.0-2%2B1~windevfix_amd64.deb

# verify the checksum, it must match:
sha256sum pve-qemu-kvm_5.2.0-2+1~windevfix_amd64.deb
33e8ce10b5a4005160c68f79c979d53b1a84a1d79436adbd00c48ec93d3bf1de  pve-qemu-kvm_5.2.0-2+1~windevfix_amd64.deb

# install it
apt install ./pve-qemu-kvm_5.2.0-2+1~windevfix_amd64.deb

After installation, do a fresh boot of the VM; if unsure, shut it down completely and then start it again through the Proxmox VE web interface.
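
If you prefer the CLI over the web interface, a full stop/start can be done with qm; a minimal sketch, with 100 as an example VMID (a reboot from inside the guest is not enough, as that keeps the old QEMU process running):

Bash:
# clean shutdown, then a fresh start, so a new QEMU process
# with the reverted build is spawned (100 is an example VMID)
qm shutdown 100
qm start 100
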
First, sorry for my last answer! Amazing work you're doing here...

How should I apply that patch? The next step for me is to reboot my Proxmox host.

Should I run these commands now, reboot Proxmox and then start my VMs?
Or reboot Proxmox, run the commands and then start my VMs?
Or reboot Proxmox, start my VMs, run the commands, then reboot my VMs?

Thanks :)
 
First, sorry for my last answer! Amazing work you're doing here...

How should I apply that patch? The next step for me is to reboot my Proxmox host.

Should I run these commands now, reboot Proxmox and then start my VMs?
Or reboot Proxmox, run the commands and then start my VMs?
Or reboot Proxmox, start my VMs, run the commands, then reboot my VMs?

Thanks :)
No reboot of the Proxmox VE host itself is required.
Just install the package and restart the affected VMs (shutdown + start from the Proxmox VE web interface).
 
No reboot of the Proxmox VE host itself is required.
Just install the package and restart the affected VMs (shutdown + start from the Proxmox VE web interface).
By reboot I mean the reboot for the Proxmox update.

I updated Proxmox days ago but haven't rebooted it yet, so the new versions aren't applied.
 
By reboot I mean the reboot for the Proxmox update.

I updated Proxmox days ago but haven't rebooted it yet, so the new versions aren't applied.
Note that most changes are applied directly and the affected daemons restarted. There are exceptions, though: a kernel update needs a full reboot to load the new kernel, and a QEMU update needs either a migration to another node with the update already installed, or a fresh start of the VM, to run with the new version.
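
A quick way to check that for QEMU, as a sketch (VMID 100 is again just an example): compare the installed pve-qemu-kvm package with the binary a running VM actually uses; if the package was upgraded while the VM kept running, the kernel marks the replaced binary as deleted:

Bash:
# the installed QEMU package version
pveversion -v | grep pve-qemu-kvm

# the binary VM 100 (example VMID) is actually running; "(deleted)"
# after the path means it still runs the old, since-replaced build and
# needs a fresh start or migration to pick up the update
ls -l /proc/$(cat /var/run/qemu-server/100.pid)/exe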
 
I'm not sure if it has something to do with this, but I noticed today that I can no longer see the disk usage of the QEMU VMs in the Proxmox GUI.

Did that even work before? Sorry, I didn't pay attention. xD
If you're talking about the Disk IO graphs, they appear to be working for me.
 
Hi,
I also ran into this problem and posted about it here:
https://forum.proxmox.com/threads/a...cart-on-windows-changed-vm-are-offline.85330/
There are some unanswered questions for me, namely: what is to be done if you already worked around the problem by configuring the new NIC and so on, and then install the patch? I don't mean the reboot part, I mean the configuration. For example: do I have to clean up some PCI devices / NICs now? Does the old configuration come back after the patch? And/or do you then get a conflict between correct and wrong NICs again, like before but vice versa?

EDIT:
And the next question is: what about a PVE installation at this version:
Code:
proxmox-ve: 6.3-1 (running kernel: 5.4.78-2-pve)
pve-manager: 6.3-3 (running version: 6.3-3/eee5f901)
pve-kernel-5.4: 6.3-3
pve-kernel-helper: 6.3-3
pve-kernel-5.4.78-2-pve: 5.4.78-2
.....
pve-qemu-kvm: 5.1.0-8
......
What is the next step here? I'm sure waiting for the next update/kernel versions is a good idea, but is there anything else?

EDIT:
@t.lamprecht
Thank you for post #52.
Regards
 
FYI, there are now package updates on the pvetest package repository addressing this in some ways.
  1. You can now set a specific machine version from the web interface:
    [screenshot: the new machine version option in the web interface]
    note: this would not have helped for this specific issue, as upstream QEMU forgot to track this problematic change in the backward compatibility code, which brings me to...
  2. QEMU machine versions 5.1 and older are now restored to their old behaviour regarding PCI and ACPI UIDs. QEMU 5.2 and newer will use the new layout, as it is actually standards-conformant and some OSes may require it for boot order changes (FWICT this is mostly macOS for now, which Apple does not yet fully support running in a hypervisor).
    But this point is only helpful as long as the specific machine version is pinned in the VM config, which brings me to...
  3. We will pin machine versions for Windows VMs. Windows is far too picky and nondeterministic regarding such changes: sometimes a lot can change and it won't matter much, and sometimes a little detail change breaks boot.
    VMs which have no specific machine version set in the config will stay at 5.1. You will be able to update this at any time in the web interface with the new option (see point 1 above) and do a VM restart to apply it (VM migrations and rollbacks of live-snapshots will continue to always use the current running version, no matter what); a CLI sketch follows after this list.
    For Linux VMs you can keep the unversioned machine type; it will then always start with the current latest machine version.
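
For reference, a minimal sketch of the same pinning done from the CLI instead of the web interface; the VMID 100 and the i440fx machine type are just example values:

Bash:
# pin VM 100 (example VMID) to the 5.1 machine version
qm set 100 --machine pc-i440fx-5.1    # use pc-q35-5.1 for q35-based VMs

# the pin ends up as a single line in the VM config:
grep ^machine /etc/pve/qemu-server/100.conf
machine: pc-i440fx-5.1
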
You need to update at least the following set of packages to get the above fixes/features:

Code:
pve-manager: 6.3-5
pve-qemu-kvm: 5.2.0-3
qemu-server: 6.3-6

NOTE: These are only available on the test repository for now. We'll test this further until around the middle of next week; if all seems OK we'll move the updates to no-subscription (I'll update this post then).
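
If you want to help test before that, a sketch of temporarily enabling pvetest on PVE 6.x (Debian Buster); the file name is arbitrary, and remember to remove it again after testing:

Bash:
# temporarily enable the pvetest repository
echo "deb http://download.proxmox.com/debian/pve buster pvetest" \
    > /etc/apt/sources.list.d/pvetest.list

apt update
apt full-upgrade

# disable it again once you are done testing
rm /etc/apt/sources.list.d/pvetest.list
apt update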

For the sake of completeness, some notes which will apply only once we have moved this to no-subscription, or if you install it from pvetest: for people on pve-subscription who either adapted their Windows VMs to the changes or installed new Windows VMs with the 5.2.0-2 QEMU version, you can set the machine type (q35 or pc-i440fx) to 5.2.
People who have older VMs and update should not have to do anything: their old VMs will be pinned to the 5.1 machine version, which is again backward compatible, and new VMs with OS type Windows will be pinned to whatever is the current machine version at the time of creation, at least once those updates are released to no-subscription (they are NOT yet).
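
To check what, if anything, a VM is currently pinned to, a quick sketch (again with VMID 100 as an example value):

Bash:
# show the machine setting of VM 100 (example VMID); empty output means
# no pin is set and fresh starts use the latest machine version
qm config 100 | grep ^machine

# for Windows VMs adapted to (or installed under) QEMU 5.2.0-2:
qm set 100 --machine pc-i440fx-5.2    # or pc-q35-5.2 for q35 VMs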
 
Thank you :)
It's clear.
I'll wait for the "no-subscription" release; one surprise is enough for me. ;)
 
FYI: Those fixes mentioned in my previous post are now out on no-subscription.
Please keep my post above and in general the following two notes from there in mind:
For people on pve-subscription who either adapted their Windows VMs to the changes or installed new Windows VMs with the 5.2.0-2 QEMU version: you should set the machine type (q35 or pc-i440fx) to 5.2.

People who have older VMs and update should not have to do anything: their old VMs will be pinned to the 5.1 machine version, which is again backward compatible, and new VMs with OS type Windows will be pinned to whatever is the current machine version at the time of creation, now that those updates have been released to no-subscription.
 
Hello,
I have problems with Windows 10 20H2 and Windows Server 2019 VMs whose NICs were changed.
As you explained, we must pin pc-i440fx to version 5.1!? But some of my machines already picked that setting by themselves and the problem still exists.
Which version do we need to choose: 5.1, 5.0, or something else?
 
As you explained, we must pin pc-i440fx to version 5.1!? But some of my machines already picked that setting by themselves and the problem still exists.
If the VM was installed under QEMU 5.2, or if it was adapted to the changes, you need to pin it to 5.2; if it was not adapted and is an older VM, then 5.1 should be correct.

Post your pveversion -v output and check the Device Manager (enable Show hidden devices in the View menu) to see if there are actually some extra hidden network devices.
 
If the VM was installed under QEMU 5.2, or if it was adapted to the changes, you need to pin it to 5.2; if it was not adapted and is an older VM, then 5.1 should be correct.

Post your pveversion -v output and check the Device Manager (enable Show hidden devices in the View menu) to see if there are actually some extra hidden network devices.
Hello,

The screenshots are attached below.
 

Attachments

  • pm0.png
  • winServ2019.png
  • pm0-1.png
Hi,
my solution for Windows Servers which ran into the problem with the changed PCI IDs:
- First and importantly: I don't work remotely, I work via the Proxmox GUI, so the VM can be offline.
- Make a snapshot or a vzdump ;-) (see the sketch after this list)
- Upgrade Proxmox to the latest version, with the machine version pinned to the installed version = 5.1
- apt update && apt upgrade; with a new kernel, a Proxmox reboot is necessary
- In the Windows VM: open the Device Manager (devmgmt.msc) and click Show Hidden Devices
- Delete all unused hidden devices
- Install the latest stable virtio drivers, version x.x.181
- Reboot Windows
- Modify and correct the NIC settings to the right IP settings
- Reboot Windows
- If everything works, delete the last snapshot or vzdump and make new ones
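
As mentioned in the second step, take a snapshot or a backup first; a minimal sketch, with VMID 100 and the local storage as example values:

Bash:
# either a snapshot (needs a storage type that supports them) ...
qm snapshot 100 before-nic-fix

# ... or a full vzdump backup to the local storage
vzdump 100 --mode snapshot --storage local --compress lzo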

Regards
 
Last edited:
This bug still exists in Proxmox 6.4.13 after the update; the solution presented here still works in this version.
 
