Windows 10 VM stuck in Automatic Repair boot loop

tunguskar

New Member
Apr 29, 2022
As the title says, the machine was running fine, but at some point I was no longer able to boot it. It gets stuck on a blue screen saying "Automatic Repair mode" and never gets past it.

I'm really stuck here. I can get into Windows' Safe Mode. I found the Memory.dmp file, but unfortunately I had not installed the Windows debugging tools, so I cannot view it with dumpchk.

I read through several threads, but nothing helped. I have two backups, but restoring them seems to lead to the same problem.

I think some setting in Proxmox could solve this problem, but I really do not know which one, or where this comes from.

 


Why do you think so? Did you change some settings?

If you have backups, you should also have the previous VM configuration and can compare the old configuration to the current one.
 
I changed nothing on the VM or in the Proxmox settings, but I did update Proxmox from version 7.x to 7.y. I'm not sure whether this changed anything. The threads I found on the web mention several possible issues:

There are many reasons why Windows gets stuck in the dreaded automatic repair loop. This could be due to missing or corrupted system files, problems with the Windows Registry, incompatible hard drives, file corruption in the Windows Boot Manager, or even a faulty Windows update.
 
Unfortunately I have no backup of Proxmox itself, just the VM.
 
I have now disabled the automatic repair option of Windows, and instead I get this error code:

0xc00000bb

Is there a way to disable the Secure Boot option?
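For context, the auto-repair toggle mentioned above is usually flipped with bcdedit from the recovery command prompt. This is generic Windows guidance, not something specific to this thread, and {default} may need to be replaced by the actual boot-entry identifier shown by bcdedit /enum.

```
REM Disable the automatic-repair loop for the default boot entry
bcdedit /set {default} recoveryenabled No

REM Re-enable it later if needed
bcdedit /set {default} recoveryenabled Yes
```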

 
When I boot into the Windows command prompt and type bootrec.exe /scanos, it finds nothing. But I can boot into Windows' Safe Mode perfectly fine, so the OS is still there. Any hints on this topic?
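For reference, the usual boot-repair sequence from the recovery command prompt is the one below; note the correct spelling is /scanos. This is generic Windows guidance and not guaranteed to fix this particular VM.

```
REM Standard Windows boot-repair commands (recovery command prompt)
bootrec /fixmbr        REM rewrite the Master Boot Record (BIOS/MBR installs)
bootrec /fixboot       REM write a fresh boot sector to the system partition
bootrec /scanos        REM scan all disks for Windows installations
bootrec /rebuildbcd    REM rebuild the Boot Configuration Data store
```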
 
I can confirm that Processor -> host looks to be broken with a Windows 10 guest on Proxmox 7.2.
I changed it to qemu64 or kvm64, and both work and solve my automatic repair infinite loop.
Unfortunately, by doing so I lose nested virtualization, which is useful for playing with WSL (Windows Subsystem for Linux) and Ubuntu 22.04.
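The same change can also be made from the Proxmox CLI with qm instead of the web UI; the VM ID 100 below is just a placeholder.

```
# Switch the guest CPU type away from "host" (100 is a placeholder VM ID)
qm set 100 --cpu qemu64      # or kvm64
# Verify the resulting VM configuration
qm config 100 | grep ^cpu
# A full stop/start (not a reboot from inside the guest) is needed
qm stop 100 && qm start 100
```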
 
You can define custom CPU Types:

https://pve.proxmox.com/pve-docs/pve-admin-guide.html#qm_virtual_machines_settings

Custom CPU Types

You can specify custom CPU types with a configurable set of features. These are maintained in the configuration file /etc/pve/virtual-guest/cpu-models.conf by an administrator. See man cpu-models.conf for format details.
Specified custom types can be selected by any user with the Sys.Audit privilege on /nodes. When configuring a custom CPU type for a VM via the CLI or API, the name needs to be prefixed with custom-.
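To illustrate, a minimal custom model that keeps nested virtualization could look like the sketch below. The model name is made up, and the exact file format should be checked against man cpu-models.conf.

```
# /etc/pve/virtual-guest/cpu-models.conf
# "nested-intel" is a placeholder name; +vmx enables Intel nested
# virtualization (use +svm on AMD hosts instead).
cpu-model: nested-intel
    flags +vmx
    reported-model kvm64
```

A VM would then reference it as cpu: custom-nested-intel in its configuration.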
 

No luck with those 2 lines:

Code:
/etc/pve/nodes/<nodename>/qemu-server/<ID>.conf
[...]
cpu: kvm64,flags=+hv-evmcs
reported-model=Westmere
[...]

Do you have a working setup to share?
 
hi,

which CPU model is running on your PVE hosts?
 

Good point, thinking about Google indexing of this thread, which may bring other people here.
My solution above is for an Intel processor; AMD would require +svm instead of +vmx, if I'm not mistaken.
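Whether the host actually exposes either flag can be checked on the Proxmox host itself, for example:

```shell
# Count logical CPUs reporting hardware virtualization support:
# "vmx" is the Intel flag, "svm" the AMD one. A count of 0 means the
# host CPU (or a BIOS/firmware setting) does not expose either.
grep -c -E 'vmx|svm' /proc/cpuinfo
```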
 
So, do you know what kind of CPU I should choose with my Pentium Silver J5005 (Gemini Lake)? I'm unsure. I also do not use WSL on my Windows machine.
 
I read that qemu64 and kvm64 are not the best ones in terms of performance.

You will find a list here:
https://qemu.readthedocs.io/en/latest/system/qemu-cpu-models.html

I would recommend choosing to emulate a CPU with a release date close to that of the host CPU you are using:

Preferred CPU models for Intel x86 hosts

The following CPU models are preferred for use on Intel hosts. Administrators / applications are recommended to use the CPU model that matches the generation of the host CPUs in use. In a deployment with a mixture of host CPU models between machines, if live migration compatibility is required, use the newest CPU model that is compatible across all desired hosts.

Cascadelake-Server, Cascadelake-Server-noTSX
Intel Xeon Processor (Cascade Lake, 2019), with “stepping” levels 6 or 7 only. (The Cascade Lake Xeon processor with stepping 5 is vulnerable to MDS variants.)

Skylake-Server, Skylake-Server-IBRS, Skylake-Server-IBRS-noTSX
Intel Xeon Processor (Skylake, 2016)

Skylake-Client, Skylake-Client-IBRS, Skylake-Client-noTSX-IBRS
Intel Core Processor (Skylake, 2015)

Broadwell, Broadwell-IBRS, Broadwell-noTSX, Broadwell-noTSX-IBRS
Intel Core Processor (Broadwell, 2014)

Haswell, Haswell-IBRS, Haswell-noTSX, Haswell-noTSX-IBRS
Intel Core Processor (Haswell, 2013)

IvyBridge, IvyBridge-IBRS
Intel Xeon E3-12xx v2 (Ivy Bridge, 2012)

SandyBridge, SandyBridge-IBRS
Intel Xeon E312xx (Sandy Bridge, 2011)

Westmere, Westmere-IBRS
Westmere E56xx/L56xx/X56xx (Nehalem-C, 2010)

Nehalem, Nehalem-IBRS
Intel Core i7 9xx (Nehalem Class Core i7, 2008)

Penryn
Intel Core 2 Duo P9xxx (Penryn Class Core 2, 2007)

Conroe
Intel Celeron_4x0 (Conroe/Merom Class Core 2, 2006)

***
Edit: But I consider this a workaround while waiting for "cpu: host" to be fully supported again in Proxmox 7.2.x.
I think that choosing "cpu: host", which avoids such limitations and reduced compatibility, is the best choice, as long as you are not running a Proxmox cluster with different CPUs. In that specific case, kvm64 or qemu64 are the best choice, to be sure they are supported on every Proxmox configuration (whether based on Intel or AMD CPUs).
 
Thanks @rcoll - do you have a link or source confirming that it's a bug?
No,
I don't have any evidence or support-ticket number confirming that it's a bug.
But my setup was working perfectly in Proxmox 7.1 and suddenly broke in Proxmox 7.2, so I am quite confident it's a bug related to Linux kernel 5.15.
And if you search the forum, it's unfortunately not the first time this "automatic repair infinite loop" has happened...
 
