No Network Adapter in fresh Windows Server after Upgrade to Proxmox 7

Hi,

Looks like the trick with q35 and machine version 6.0 or 5.x no longer works with PVE 7.1.10.

This is my current solution


Code:
System: Windows Server 2019 German
Machine: Default (i440fx) 6.1
Driver: VirtIO 0.1.217
NIC: VirtIO

Disk driver: VirtIO (2k19)
Disk bus: SCSI
Installed by loading the driver during the Windows installation
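For reference, the setup above roughly corresponds to a VM config like the following sketch (the VMID 101, storage name, disk size and MAC address are placeholders, not taken from the original post):

```
# /etc/pve/qemu-server/101.conf (sketch)
machine: pc-i440fx-6.1
ostype: win10
scsihw: virtio-scsi-pci
scsi0: local-lvm:vm-101-disk-0,size=60G
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
```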

After first login:
* Open Device Manager
** Install the driver for "Ethernet Controller"
*** Driver: D:\NetKVM\2k19
**** Installation hangs with Code 56
** Kill the process "drvinst.exe"
** In Device Manager the NIC is now listed as "Unknown device" ("Unbekanntes Gerät")
** Install the driver for the "Unknown device"
*** Driver: D:\NetKVM\2k19

The NIC is now working.




Question to Proxmox: is the change of the default machine type for Windows guests from i440fx to q35 related to this bug?

Also interesting: the Red Hat certificate of the driver is outdated (26.1.2022) => I saw this in mmc -> Certificates.


kr
Roland
 
I installed Windows Server 2019 Standard (German edition) on PVE 7.2-3 inside an i440fx-6.2 virtual machine and got a NIC working.

During the Windows Server setup procedure there was no NIC present.

After setup I mounted the virtio-win drivers (version 0.1.215) and installed virtio-win-gt-x64.msi to make sure I got most of the drivers.
I then shut down the VM to add a virtio NIC to it and booted it up again.
The NIC was present in the Device Manager but faulty (yellow icon, you know the drill).
Here another reboot was required. It took a little while longer, but don't worry, it'll be fine.
I selected the NIC in the Device Manager and updated the driver by manually pointing it to "[my iso mount point]\NetKVM\2k19\amd64".
Windows tried to tell me it was already using the best driver for this device, so I selected it again and "uninstalled the device".
After scanning for new hardware (the magnifier-on-monitor icon in the Device Manager's toolbar) the NIC showed up again without any problems.
From that point on, even after unmounting the drivers ISO and multiple reboots, the NIC keeps working.

I also tried the whole process with version 0.1.217 of the virtio-win drivers, but it didn't work. Not even virtio-win-gt-x64.msi ran without problems.
 
So nobody knows how to solve this for Win 2019 or 2022? It's really annoying to install everything in English and then switch to German with the language pack. All this extra work is so unnecessary. Why can nobody fix this?
 
So nobody knows how to solve this for Win 2019 or 2022?
Windows is proprietary; nobody outside of Microsoft can look at the source, short of reverse engineering their binaries or hacking around, which may not even be legal and is surely not a route we'll take.
Why can nobody fix this?
Ask Microsoft why they messed this up just by using a specific language..
 
I ran into the same: a Windows 2016 RDS server migrated from VMware to Proxmox 7.2.5 / virtio 0.1.217. I tried nearly everything from this thread, including some other things, re-imported the 120 GB disk several times, and have now been sitting here for 8 hours without a NIC. Looks like I need to go back to VMware with this one. Not good.........
 
SOLVED:
- re-import the disk
- attach it as an IDE disk
- set the machine to q35 6.0 with no network card, just to see whether it comes up -> YES
- Device Manager: show hidden devices -> delete the old e1000
- uninstall the VMware Tools
- install the full virtio 0.1.215 package
- shut down
- add a virtio network card
- boot, and the network was found
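The intermediate state of the steps above (before the virtio NIC is re-added) would look roughly like this in the VM config; the VMID, storage name and disk size are placeholders:

```
# /etc/pve/qemu-server/100.conf (sketch of the intermediate state)
machine: pc-q35-6.0
ostype: win10
ide0: local-lvm:vm-100-disk-0,size=120G
# no netX line yet - the virtio NIC is only added after the virtio 0.1.215 install
```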

pffffffffffffffffffffffffffffff
 
Hi,

yes, that works @reschm!
I've done the following steps:

1.) Install Windows 10
2.) Switch off the freshly installed VM
3.) Mount the virtio-win 0.1.208-1 ISO from the link mentioned above
4.) Install the VirtIO drivers (I got an error the first time, I think because I used the x86 installer and not the x64 installer)
5.) Restart, everything is fine.
 
The problem seems to be resolved with the latest QEMU 7.2 from the pvetest repository. Maybe somebody who has a system with the pvetest repo enabled can test with Win 20xx and confirm?
 
FYI: QEMU 7.2 has been made available on no-subscription just as of now.

While we understand that all of you can't wait until this unpleasant behavior is finally solved, it's still a major new release of a core package, and as such we'd like to give it much broader visibility for a few weeks before we move it to enterprise.

Please note that if you already tested your workload on the new version in some test bed, and found no regression for your HW and use case, you can always enable the no-subscription repository (e.g., through the Web UI) temporarily and then pull in only the new QEMU 7.2 update by issuing:
Bash:
apt update
apt install pve-qemu-kvm

Remember to disable the no-subscription repository afterwards again.
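For reference, enabling the no-subscription repository by hand boils down to a single apt source line (PVE 7 is based on Debian Bullseye; the filename is just a convention):

```
# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
```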
 
@t.lamprecht thanks for your advice.
For me, this behavior isn't fixed with QEMU 7.2.
I have tried all the mentioned versions from this thread, but can't find a suitable way for me.
What other ideas are there?

My current setup is my initial setup: several VMs (German ISO, Win10/SRV2k19) with i440fx-6.1 and VirtIO.
The curious thing is that I get a DHCP address that I can ping externally, but not internally. As soon as I set it to static, nothing works. The problem has occurred since a reboot of my VMs.

Best regards
Torben
 

Did the VMs have that problem before QEMU 7.2?

IIRC, the problem gets inherited if it happened before QEMU 7.2,

so you may need to entirely remove networking and re-add it after QEMU 7.2. (Not sure, though.)
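The "remove and re-add" step can be sketched at the config level. On a real host this would be `qm set <vmid> --delete net0` followed by `qm set <vmid> --net0 virtio,bridge=vmbr0`; the snippet below only edits a scratch copy of a config (VMID 101, MAC addresses and bridge are made-up placeholders), so nothing here touches /etc/pve:

```shell
#!/bin/sh
# Scratch copy of a VM config; on a real PVE host this would be
# /etc/pve/qemu-server/101.conf, edited via `qm set` instead of sed.
conf=/tmp/101.conf
cat > "$conf" <<'EOF'
machine: pc-i440fx-6.1
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
ostype: win10
EOF
# remove the NIC entry entirely (analogous to: qm set 101 --delete net0)
sed -i '/^net0:/d' "$conf"
# re-add a fresh virtio NIC (analogous to: qm set 101 --net0 virtio,bridge=vmbr0)
echo 'net0: virtio=DE:AD:BE:EF:00:02,bridge=vmbr0' >> "$conf"
grep '^net0:' "$conf"
```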
 
I guess not, but I'm not sure.
What do you mean exactly by
so you may need to entirely remove networking and re-add after qemu 7.2.
I deleted every single network device from each VM and re-added it. I uninstalled all drivers and tried different versions.
I have no more ideas...
BR
 
The known bug with the German version before QEMU 7.2 is that there is no network device at all in the Control Panel / ncpa.cpl.
I'm not sure your problem is related.
Have you tried the regular E1000?
Have you tried a US/English version?

edit: correction: the driver in Device Manager can be installed properly, but ncpa.cpl shows no devices.
 
The known bug with the German version before QEMU 7.2 is that there is no network device at all in the Control Panel / ncpa.cpl, and Device Manager can't use the virtio driver.
I'm not sure your problem is related.
Have you tried the regular E1000?
Have you tried a US/English version?
My problem seems to be related to this issue.
I am only using German ISOs.
Using E1000 doesn't bring any improvement.

I haven't tried a US version so far, because that would be a workaround, not a solution.

Is here anybody who can help?

BR
 
Cross-posting this same update from both the GitHub thread for the virtio drivers (https://github.com/virtio-win/kvm-guest-drivers-windows/issues/750#issuecomment-1814759725) and the GitLab thread for the QEMU project (https://gitlab.com/qemu-project/qemu/-/issues/774)

We just ran into this on our KVM/QEMU based platform after moving to QEMU 6.2. Everything was a-ok running QEMU 4.2 prior, but as soon as the upgrade to 6.2 happened, we started hitting this problem on German Windows, as all these threads describe.

Looking through the git history of QEMU's `hw/i386/acpi-build.c`, there isn't a crazy amount of churn between 6.2 and 7.2, so we can narrow the window pretty quickly.

- <6.2 upstream cutoff> 211afe5c69 hw/i386/acpi-build: Deny control on PCIe Native Hot-plug in _OSC
- 2914fc61d5 pci: Export pci_for_each_device_under_bus*()
- 36efa250a4 hw/i386/pc: Allow instantiating a virtio-iommu device
- 867e9c9f4c hw/i386/pc: Remove x86_iommu_get_type()

Based on what the GitLab thread here says for git bisect windows: https://gitlab.com/qemu-project/qemu/-/issues/774#note_1178918198

DEPENDENCY 1
hw/i386/acpi-build: Avoid 'sun' identifier - https://github.com/qemu/qemu/commit/9c2d83f5a0ef558f8882998af6cb800101243655

DEPENDENCY 2
acpi: x86: refactor PDSM method to reduce nesting - https://github.com/qemu/qemu/commit/a12cf6923ce121633d877cf3ec53b2bcc85763ca

FIXES THE ISSUE
x86: acpi: _DSM: use Package to pass parameters - https://github.com/qemu/qemu/commit/467d099a2985c1e1bd41b234529d7f2262fd2e27

We tested this out on both pc and q35 VMs with these patches applied, and the issue was immediately fixed.

Now, there is some talk in these threads about 8.1 not working again.

I haven't tested this, but if I had to take a wild guess, I'd say this commit breaks it again:
x86: acpi: workaround Windows not handling name references in Package properly - https://github.com/qemu/qemu/commit/44d975ef340e2f21f236f9520c53e1b30d2213a4
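As a quick way to check whether a host is affected, the installed QEMU version can be compared against 7.2 (the first release containing the fix, per the commits above). A minimal sketch using `sort -V`; the version string is hard-coded here and would normally come from `dpkg-query -W -f='${Version}' pve-qemu-kvm` on a PVE host:

```shell
#!/bin/sh
# Hard-coded example version; on a real host, query the package manager instead.
installed="6.2.0-11"
fixed="7.2"
# sort -V orders version strings; if "fixed" sorts first, "installed" is newer or equal.
if [ "$(printf '%s\n' "$fixed" "$installed" | sort -V | head -n1)" = "$fixed" ]; then
  echo "QEMU $installed should contain the _DSM fix"
else
  echo "QEMU $installed predates $fixed - the German-Windows NIC bug is likely present"
fi
```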
 
