[SOLVED] [Warning] Latest Patch just broke all my WINDOWS vms. (6.3-4) [Patch Inside] (See #53)

XoCluTch

Active Member
Jul 29, 2018
PVE Manager Version

pve-manager/6.3-4/0a38c56f
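(For anyone comparing versions: that string comes from pveversion. A quick check of the relevant packages on the host, assuming a stock install, would be something like the following.)

Code:
# show the manager and QEMU package versions on this host
pveversion -v | grep -E 'pve-manager|pve-qemu-kvm'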


It appears all the network adapters have been replaced, causing my static IPs to no longer be set.


… will add more info in a min




--------------------------------------------------------------------------------------------------------
Update:


For the solution, see post #53 of this thread:
https://forum.proxmox.com/threads/w...dows-vms-6-3-4-patch-inside.84915/post-374993
 
I was thinking it was Open vSwitch that changed it... I had to reconfigure the network interfaces with static IPs... I'm not sure how to restore the previous ones.
 
No, the problem is that the new qemu-kvm reorders the virtual PCI cards. For me, I had to fix the static IPs manually. And it is just a Windows problem; my Linux VMs did not change.
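(If you want to verify the reordering for a given VM, one rough way is to dump the QEMU command line PVE generates and compare the bus=/addr= values of the NIC before and after the upgrade. Just a sketch: VMID 204 is taken from the configs posted later in this thread, and the --pretty flag assumes a reasonably recent qemu-server.)

Code:
# dump the QEMU command line PVE would start this VM with
qm showcmd 204 --pretty | grep -i virtio-net
# if the bus=/addr= values changed after the upgrade, Windows
# enumerates the NIC as a brand new device and drops the static IP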
 
Same story on 2 different PVE hosts with Windows Server 2016 Standard and Windows Server 2019 Standard

Windows Server 2016 Standard
Code:
root@pve-node2:~# cat /etc/pve/qemu-server/204.conf 
agent: 1
boot: c
bootdisk: scsi0
cores: 12
cpu: host,flags=+pcid;+spec-ctrl;+pdpe1gb;+hv-tlbflush
ide2: none,media=cdrom
memory: 57344
name: s204-30
net0: virtio=4A:7F:02:8E:AE:26,bridge=vmbr0,tag=30
net1: virtio=FA:8A:03:FA:F4:ED,bridge=vmbr0,tag=30
numa: 1
onboot: 1
ostype: win10
scsi0: tank254:vm-204-disk-0,size=256G,ssd=1
scsi1: tank254:vm-204-disk-1,size=256G,ssd=1
scsi2: temp:204/vm-204-disk-0.qcow2,size=128G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=9fb172f0-fe95-4a61-98df-32f0e4dcc0da
sockets: 1
vmgenid: 1f32880c-4ee6-4d32-8906-5e94c87060eb

Windows Server 2019 Standard
Code:
root@075-pve-01833:~# cat /etc/pve/qemu-server/242.conf
agent: 1
bootdisk: scsi0
cores: 6
cpu: host
ide2: none,media=cdrom
machine: q35
memory: 16384
name: 075-srv-ts1
net0: virtio=D6:5E:28:C4:A2:16,bridge=vmbr0,firewall=1,tag=10
numa: 0
onboot: 1
ostype: win10
scsi0: tank:vm-242-disk-0,size=256G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=8d34f334-2871-4cd5-98fb-f5d6087711a0
sockets: 1
vmgenid: ebe8ab4e-4e94-463b-b083-47722279795a
 
Can you try Stefan's proposed possible workaround from another thread, setting the machine back to an older model? See: https://forum.proxmox.com/threads/p...e-materials-of-windows-vms.84869/#post-372763
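For reference, pinning the machine version from the CLI would look roughly like this (a sketch only; which version to pin depends on the QEMU version the VM was installed under, and the VMIDs are just the ones from the configs posted above):

Code:
# pin the i440fx VM to the pre-5.2 machine model
qm set 204 --machine pc-i440fx-5.1
# q35 VMs use the corresponding q35 model
qm set 242 --machine pc-q35-5.1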
 
On the third PVE host (after the PVE upgrade), Windows 10 Pro booted without any issues (no missing devices observed).
 
Hey guys, I think it is good that they did it.
The reason was: FreeBSD compatibility.

Earlier it was not possible to pass through with the pcie=1 option on any FreeBSD guest.

The error occurred because QEMU used the same device IDs for its PCIe root devices as some built-in FreeBSD modules. This change probably corrects that issue.

There is actually a very long thread about it:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236922

If I'm right, this step was very necessary.
But as far as I understand, FreeBSD should have made the change and not QEMU, because FreeBSD used the wrong device IDs initially. However, it doesn't really matter; it's probably fixed now.
(So you can pass through NICs/SR-IOV to OPNsense/pfSense or HBAs to FreeNAS etc. with pcie=1; see the sketch below.)
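A passthrough entry in the guest config would then look roughly like this (purely illustrative: 01:00.0 is a made-up PCI address, and pcie=1 requires the q35 machine type):

Code:
# /etc/pve/qemu-server/<vmid>.conf (excerpt)
machine: q35
hostpci0: 01:00.0,pcie=1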

I need to test later whether that's true. It's just my assumption as to why QEMU did this.

Cheers
 
I've had multiple instances on multiple hosts across different sites where Windows VMs treated their network interface as a new one and therefore lost their static IP address configuration. None of the other new devices cause issues, but the NIC does if you're statically configured.
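(If a lot of guests need their addresses put back, re-applying the static config from an elevated prompt inside Windows saves some clicking. Just a sketch; the adapter name and addresses are placeholders:)

Code:
:: re-apply the static IPv4 configuration on the newly enumerated adapter
netsh interface ipv4 set address name="Ethernet" static 192.168.1.10 255.255.255.0 192.168.1.1
netsh interface ipv4 set dnsservers name="Ethernet" static 192.168.1.1 primary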
 
Same issue for me too. One of them was an AD server, so all my web apps and VPNs were hosed. I had to drive back home to fix it locally.
I don't know why, but PVE updates seem to be a lot less stable than before, even for home use.
 
I don't know why people who don't understand anything need to blame something...
The world would be so much better without that.

FYI:
PVE = basically the web interface + custom kernel + package collection.
The only decision they can make is to allow the QEMU 5.2 update through or hold it back.

If you are a subscriber, you probably get that update a bit delayed, because there is a chance that QEMU etc. reverts those changes and it was only a bug.

Anyway, have a good Saturday everyone xD
 
PVE = basically the web interface + custom kernel + package collection.
The only decision they can make is to allow the QEMU 5.2 update through or hold it back.
That's actually not quite right. We're involved in QEMU development, add things like VMA backup and Proxmox Backup integration (which maps a backend written in Rust with bindings to QEMU's async coroutines), triage bugs upstream and send patches there too; further, we add lots of fixes earlier than new QEMU releases and do the build and packaging ourselves.

PVE is a lot more than "just some web interface": we package all core packages ourselves and provide a modern REST API for full clustering, Ceph hyper-converged setups, HA, containers, VMs, user and permission management, lots of storage interfaces and much more.

The only decision they can make is to allow the QEMU 5.2 update through or hold it back.
Not true either.

Hey guys, I think it is good that they did it.
The reason was: FreeBSD compatibility.
Where did you get that from? Your linked bug report still talks about patches on the FreeBSD side, and the git log between v5.1.0..v5.2.0 does not include anything like that.
 
Same issue for me too. One of them was an AD server, so all my web apps and VPNs were hosed. I had to drive back home to fix it locally.
I don't know why, but PVE updates seem to be a lot less stable than before, even for home use.

This package is only available in the test and no-subscription repositories, so nothing to complain about.
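(Anyone who wants to stay on the current QEMU until this is sorted out can also hold the package back. A sketch, assuming apt is used directly on the host:)

Code:
# keep the currently installed QEMU version during upgrades
apt-mark hold pve-qemu-kvm
# undo later with: apt-mark unhold pve-qemu-kvm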
 
If I'm not mistaken, even if the old PCI device IDs are restored, the now-missing devices will remain listed as missing. Correct?
Maybe. We still have a hard time reproducing this. That doesn't mean we don't believe it is an issue; it just seems we're missing a feature bit that needs to be toggled for this to happen, and we are currently trying and evaluating differences (the long-time Windows VMs I have for testing were unaffected by this, so something needs to be different).

Did you try the referenced possible workaround by setting the machine type?

Further, do you have a rough idea which QEMU version those VMs were created under, and thus when Windows was installed?
Stating a PVE version and year would already help a lot.
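(On the Windows side, the old adapter that "remains as missing" should show up as a non-present device. One commonly documented way to see and remove it, just as a sketch from an elevated command prompt:)

Code:
:: make Device Manager show non-present (ghost) devices, then enable
:: View -> Show hidden devices and uninstall the old NIC
set devmgr_show_nonpresent_devices=1
start devmgmt.msc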
 
I can confirm this. Today I updated my Proxmox environment, and on all Windows-based machines the e1000 NIC was replaced with a new one and the IPs were set to DHCP. This is very strange.
 
