New Windows VM Startup Recovery

nalysarbur (Nov 7, 2025)
Hello,

I have a new Proxmox environment with only a single VM. When I create a new Windows VM and select Windows as the guest OS, the machine boots into Startup Repair even though I have not attached any media. If I create a brand-new VM and do attach installation media, it still boots into Startup Repair instead of the installation ISO unless I pick the ISO manually from the boot menu.

Is this intended? Shouldn't the disks of a brand-new VM contain no data at all?

Thank you,
 
I am not sure I actually understand your situation. When you create a new virtual disk, it should contain only virtual zeros. ("Virtual" because usually no actual space is assigned; the disk is "sparse".)

Any chance you chose the "wrong" installation medium?

Some hints regarding the installation of Windows can be found here: https://pve.proxmox.com/wiki/Windows_11_guest_best_practices
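
If in doubt, you can spot-check that a freshly allocated volume really reads back as zeros. A minimal sketch, assuming a plain raw LV; the device path is an example, take yours from lvs:

Code:
# Read the first 64 MiB of the new logical volume. If it is truly blank,
# hexdump collapses the output to a single all-zero line followed by '*';
# anything else means leftover data.
dd if=/dev/pve/vm-9999-disk-0 bs=1M count=64 status=none | hexdump -C | head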
 
That's the part that makes no sense. When I create a new VM, select "Do not use any media" in the OS tab, and choose Windows as the guest OS type, the brand-new VM automatically boots into Startup Repair. I confirmed there is no media in the CD/DVD drive. I also confirmed there were no disks left over from a previous VM that shared the VM ID. I also tried making an additional VM with a new ID that was never used before, and it still happened.
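
For reference, this is roughly the CLI equivalent of what I do in the wizard (values are examples, not my exact command):

Code:
# New Windows guest with no installation media attached.
qm create 101 --name TestWindowsVM --ostype win11 --bios ovmf --machine q35 \
  --scsihw virtio-scsi-single --scsi0 vol-01:100 --efidisk0 vol-01:1,efitype=4m \
  --tpmstate0 vol-01:1,version=v2.0 --ide2 none,media=cdrom \
  --memory 8192 --cores 4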


[Attachment: OS_Config.png]
 
How can EFI storage on LVM have the disk format qcow2?

When I select EFI storage on LVM, it automatically switches to raw and is greyed out.

[Attachment: 1762595528992.png]

In your video it looks like this:

[Attachment: 1762596120415.png]
 
Hi,

Can you reproduce this with a VM ID that has never been used (9999, for example)?
How is your "vol-01" configured?

Best regards,
 
As I mentioned in an earlier post, I confirmed the behavior with a VM using an ID that was never used before.

vol-01 points to a Pure Storage array via iSCSI, which was configured using the following guide. I understand the version is older, but I was able to follow the steps without issue.

https://support.purestorage.com/bun...opics/t_connecting_proxmox_to_flasharray.html
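
For reference, the path layer can be sanity-checked with the standard open-iscsi and multipath-tools commands (the map name below is the one that appears in the pvs output further down):

Code:
# List active iSCSI sessions to the array.
iscsiadm -m session -P 1
# Show the multipath map backing the LVM physical volume.
multipath -ll 3624a93704460047a0e2745ec00086120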


@nalysarbur , can you please post the contents of both of your virtual machine configuration files from /etc/pve/qemu-server, your storage.cfg, and the output of the pvs/vgs/lvs commands?

Here is storage.cfg:

Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

lvm: vol-01
        vgname pure-storage-vg
        content images
        saferemove 0
        shared 1
        snapshot-as-volume-chain 1


Here are the results of the commands BEFORE the VM is created:

Code:
root@eovmhostd09:/etc/pve# pvs
  PV                                            VG              Fmt  Attr PSize     PFree   
  /dev/mapper/3624a93704460047a0e2745ec00086120 pure-storage-vg lvm2 a--  <1024.00g <991.99g
  /dev/sda3                                     pve             lvm2 a--   <222.00g   16.00g
root@eovmhostd09:/etc/pve# vgs
  VG              #PV #LV #SN Attr   VSize     VFree   
  pure-storage-vg   1   1   0 wz--n- <1024.00g <991.99g
  pve               1   3   0 wz--n-  <222.00g   16.00g
root@eovmhostd09:/etc/pve# lvs
  LV                  VG              Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0.qcow2 pure-storage-vg -wi-ao----  <32.01g                                                   
  data                pve             twi-a-tz-- <129.85g             0.00   1.23                           
  root                pve             -wi-ao----  <65.50g                                                   
  swap                pve             -wi-ao----    8.00g                                                   
root@eovmhostd09:/etc/pve# pvesm list vol-01
Volid                      Format  Type             Size VMID
vol-01:vm-100-disk-0.qcow2 qcow2   images    34359738368 100

Here are the results of the commands AFTER the VM is created:

Code:
root@eovmhostd09:~# pvs
  PV                                            VG              Fmt  Attr PSize     PFree 
  /dev/mapper/3624a93704460047a0e2745ec00086120 pure-storage-vg lvm2 a--  <1024.00g 891.96g
  /dev/sda3                                     pve             lvm2 a--   <222.00g  16.00g
root@eovmhostd09:~# vgs
  VG              #PV #LV #SN Attr   VSize     VFree 
  pure-storage-vg   1   4   0 wz--n- <1024.00g 891.96g
  pve               1   3   0 wz--n-  <222.00g  16.00g
root@eovmhostd09:~# lvs
  LV                  VG              Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-100-disk-0.qcow2 pure-storage-vg -wi-ao----  <32.01g                                                   
  vm-101-disk-0.qcow2 pure-storage-vg -wi-ao----    4.00m                                                   
  vm-101-disk-1.qcow2 pure-storage-vg -wi-ao---- <100.02g                                                   
  vm-101-disk-2       pure-storage-vg -wi-ao----    4.00m                                                   
  data                pve             twi-a-tz-- <129.85g             0.00   1.23                           
  root                pve             -wi-ao----  <65.50g                                                   
  swap                pve             -wi-ao----    8.00g                                                   
root@eovmhostd09:~# pvesm list vol-01
Volid                      Format  Type              Size VMID
vol-01:vm-100-disk-0.qcow2 qcow2   images     34359738368 100
vol-01:vm-101-disk-0.qcow2 qcow2   images          540672 101
vol-01:vm-101-disk-1.qcow2 qcow2   images    107374182400 101
vol-01:vm-101-disk-2       raw     images         4194304 101
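
Side note on the question above about qcow2 volumes on LVM: with snapshot-as-volume-chain enabled, Proxmox formats each logical volume as a qcow2 container, which is why pvesm reports qcow2 even though the LV itself is a plain block device. One way to confirm, using a device path from the lvs output above (-U avoids the lock check if the VM is running):

Code:
# Inspect the container inside the LV; with snapshot-as-volume-chain
# enabled this should report "file format: qcow2".
qemu-img info -U /dev/pure-storage-vg/vm-101-disk-1.qcow2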


Here are the config files:

Code:
root@eovmhostd09:/etc/pve/qemu-server# ls
100.conf  101.conf

Here is the config file for the Linux test VM, the one with ID 100:

Code:
boot: order=scsi0;ide2;net0
cores: 2
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 8192
meta: creation-qemu=10.1.2,ctime=1762470467
name: TestVM
net0: virtio=BC:24:11:36:B9:65,bridge=vmbr0,firewall=1,tag=3013
numa: 0
onboot: 1
ostype: l26
scsi0: vol-01:vm-100-disk-0.qcow2,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=d7d00ee0-e2fa-49b6-baa8-5cdf3c362875
sockets: 2
vmgenid: e5cae219-44b7-40c6-ab57-cbdc9e47b7c5

Here is the config file for the Windows test VM, the one with ID 101:

Code:
bios: ovmf
boot: order=scsi0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
efidisk0: vol-01:vm-101-disk-0.qcow2,efitype=4m,pre-enrolled-keys=1,size=528K
ide2: none,media=cdrom
machine: pc-q35-10.1
memory: 8192
meta: creation-qemu=10.1.2,ctime=1762895914
name: TestWindowsVM
net0: vmxnet3=BC:24:11:D9:61:35,bridge=vmbr0,firewall=1,tag=3013
numa: 0
onboot: 1
ostype: win11
scsi0: vol-01:vm-101-disk-1.qcow2,iothread=1,size=100G
scsihw: virtio-scsi-single
smbios1: uuid=f0795c47-5fd5-4266-934c-dd0f22117289
sockets: 2
tpmstate0: vol-01:vm-101-disk-2,size=4M,version=v2.0
vmgenid: b1155e20-85ed-4db1-8a45-5be40baacff2

Thank you for taking the time to look at this with me.
 
Interestingly, if I create a SECOND Windows VM while the first Windows VM still exists, I get the expected result: no boot device is found.

It looks like the old data is not being purged completely when a VM is deleted.
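
One setting that might be relevant here: the storage above is configured with saferemove 0, so LVM volumes are not zeroed when they are deleted, and a new disk can land on extents that still hold old data. A possible mitigation to test (a sketch, not a confirmed fix; note that zeroing makes deletes slower):

Code:
# Enable zero-on-delete for the shared LVM storage.
pvesm set vol-01 --saferemove 1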



OK, it has been some time since I wrote the top of this message, and I believe I can now re-create this on demand.

  1. Follow the steps shown in the video I posted earlier EXCEPT specify the disk type as IDE.
  2. Complete the Windows Server installation.
  3. Shut down and delete the VM.
  4. Create a NEW Windows VM following the steps shown in the video, this time specifying the disk type as SCSI.
  5. You should now see it boot into Startup Repair even with no media attached.
Let me know if you're able to reproduce it as well. Hope this helps!
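
If someone wants to check the stale-data theory directly, here is a minimal sketch (LV names are examples; take the real ones from lvs). Before deleting the VM in step 3, note which physical extents its disk occupies; after step 4, check whether the new disk landed on the same extents and whether it shows old on-disk signatures before its first boot:

Code:
# Show which physical extents each LV occupies; if the new disk reuses
# the extents of the deleted one, old guest data may still be there.
lvs -o lv_name,seg_pe_ranges pure-storage-vg

# Peek at the start of the new disk BEFORE booting the VM; an NTFS or
# GPT signature here would confirm leftover data.
# (vm-102-disk-1.qcow2 is an example name.)
dd if=/dev/pure-storage-vg/vm-102-disk-1.qcow2 bs=1M count=1 status=none | hexdump -C | head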