[SOLVED] PVE 6.3.4 lost hardware configuration of Windows VMs

Lada

Hello,

I updated my PVE from 6.3.3 to 6.3.4 and after the reboot I noticed that all my Windows VMs (XP and 7) had lost their hardware configuration.
By that I mean Windows re-installed all of the drivers again (successfully), I got a new network interface (and lost my previous network settings), a new monitor/graphics card (and lost my screen resolution), and so on. The hardware configuration shown in PVE, however, did not change.

Even if I restore a previous backup (made under PVE 6.3.3 or earlier), I get the same behaviour.

Is this a known issue?

Thank you.
 
I assume this would be the effect of updating to QEMU 5.2. It's certainly not optimal behaviour though; my guess would be that QEMU switched some internal PCI layout around, and Windows now thinks it is running on an entirely different machine, thus reinstalling all drivers and such.

To confirm, can you give us the output of pveversion -v, as well as your VM config (qm config <vmid>)?

Then, could you test the following: Restore your old backup, but do not boot it yet on the new version. Run the following on your PVE node: qm set <vmid> -machine pc-i440fx-5.1 (or qm set <vmid> -machine pc-q35-5.1 if you have configured your VM to use q35, i440fx is the default). Then, start the VM and see if the problem still occurs.

This would not be a fix, just a workaround, but could help trace the issue.
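Put together, the test sequence would look roughly like this (just a sketch: the backup archive path is a placeholder, and the exact qmrestore options depend on your storage setup):
Code:
# restore the old backup, but do not start the VM yet
# (--force overwrites the existing VM with that ID)
qmrestore <backup-archive> <vmid> --force
# pin the machine type to the QEMU 5.1 layout (use pc-q35-5.1 for q35 VMs)
qm set <vmid> -machine pc-i440fx-5.1
# only now start the VM and check whether Windows still re-detects its hardware
qm start <vmid>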
 
As asked, here is the information:

Code:
~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.98-1-pve)
pve-manager: 6.3-4 (running version: 6.3-4/0a38c56f)
pve-kernel-5.4: 6.3-5
pve-kernel-helper: 6.3-5
pve-kernel-5.4.98-1-pve: 5.4.98-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.73-1-pve: 5.4.73-1
pve-kernel-5.4.60-1-pve: 5.4.60-2
pve-kernel-5.4.34-1-pve: 5.4.34-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-1
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve1



Code:
~# qm config 129
agent: 1
balloon: 128
boot: cdn
bootdisk: virtio0
cores: 1
description: ******
ide2: none,media=cdrom
localtime: 1
memory: 256
name: SyncbackPro
net0: virtio=B2:CC:64:B2:BC:70,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: wxp
smbios1: uuid=9c21489e-6e50-4bdc-83a6-7f201c2c187a
sockets: 1
startup: order=9,up=20
tablet: 1
virtio0: local-lvm-2:vm-129-disk-0,cache=writeback,size=10G
virtio1: local-lvm-2:vm-129-disk-1,cache=writeback,size=2G
vmgenid: 4be1056a-af38-4044-8208-6cdc8f9ca283

I did what you said (after restoring an old backup):

Code:
~# qm set 129 -machine pc-i440fx-5.1
update VM 129: -machine pc-i440fx-5.1

But the problem occurred again :(
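For anyone retracing this: before starting the VM, the pinned machine type should show up in the VM config, so a quick sanity check would be something like:
Code:
qm config 129 | grep machine
# expected if the pin was applied: machine: pc-i440fx-5.1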
 
Thanks for testing! I'll try to reproduce the issue and find a solution then...

As a workaround for now (in case you need it), you should be able to downgrade QEMU back to 5.1.0-8:
Code:
apt install pve-qemu-kvm=5.1.0-8 libproxmox-backup-qemu0=1.0.2-1
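If you stay on the downgraded version for a while, you could additionally put the packages on hold so that a routine apt upgrade doesn't pull QEMU 5.2 back in (a standard apt feature, nothing PVE-specific; remember to unhold once a fixed version is out):
Code:
apt-mark hold pve-qemu-kvm libproxmox-backup-qemu0
# later, when a fixed version is available:
apt-mark unhold pve-qemu-kvm libproxmox-backup-qemu0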
 
Lada, still running XP... :cool: Stefan, thanks for looking into it. As I understand it, there is nothing one can really do about it other than re-applying the network settings. For me it is OK, but the fact that this came as a surprise was not very nice.
 
What I don't understand is how such a major change could have slipped through the testing without raising red flags.

For example: a Windows 7 installation upgraded to Windows 10 doesn't tolerate many hardware changes; Windows 10 then becomes deactivated and nothing fixes it. For larger, multi-machine setups, I'm sure it's disastrous.

Of course, fixing each machine by hand is a solution, but my question is: why, and how was it decided that such a change is OK to make?
 
Dunno why people don't use DHCP... I'm using DHCP even on the WAN side... But aside from that:

This bug slipped through because no one had seen it: it only happens if your VM was created before PVE 6, and you only notice it if you have a static IP setup.
It's not that easy to test every case out there.

And it's not a reason to blame someone, especially if you are using a free product...
 

DHCP complicates matters. The MAC address of the NIC actually changes, so even with a static DHCP reservation the server would get a different IP address... You do realise it's not just the NIC that changes, but almost all of the other hardware, don't you?

I'm not criticising a free product at all; QEMU is not part of Proxmox as such. I'm just wondering how this was never detected before. Surely that is a question that needs to be asked?
 
What I don't understand is how such a major change could have slipped through the testing without raising red flags.
I'm not criticising a free product at all; QEMU is not part of Proxmox as such. I'm just wondering how this was never detected before. Surely that is a question that needs to be asked?
https://forum.proxmox.com/threads/w...my-windows-vms-6-3-4.84915/page-2#post-373308

And as you can read from:
https://forum.proxmox.com/threads/w...my-windows-vms-6-3-4.84915/page-2#post-373331
we found a reproducer, which was not straightforward, and we are checking it out more closely.
 

But you set a static MAC address in the web interface, so even with a new NIC it should stay the same. That's strange then :D
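For reference, the MAC address lives in the net0 line of the VM config (see the qm config output earlier in the thread), so it is not affected by the machine-type change; a quick way to check it on the host would be:
Code:
qm config 129 | grep ^net0
# e.g.: net0: virtio=B2:CC:64:B2:BC:70,bridge=vmbr0,firewall=1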
 
I had the same "issue" that the hardware changed. But the MAC stayed the same, therefore DHCP worked as expected.
 
I updated to PVE 6.3.6, and now I can restore my Windows VMs back to normal with the machine type pinned to QEMU 5.1.

Thank you.
 
