New VM already installed OS

Cutthroat

New Member
Sep 24, 2025
New to Proxmox, but I'm seeing something very weird. I am creating a NEW Linux VM: I go through the wizard to create the VM specs, tell it to mount the Ubuntu ISO, create a new disk, and so forth. When I power on the VM, it boots to Windows. What in the world? I am not telling it to use any existing disk. I'm not even sure where this Windows disk came from, maybe from some test Windows VM I created and deleted in the past. But how is PVE booting a new VM from some old random Windows disk? The new disk I create when setting up the Linux VM is 80 GB; that old Windows VM was 100 GB. Also, if I create the disk at 5 GB, it boots into the Linux installer like I would expect. If I create the disk at 40 GB, Windows boots into repair mode and can't fully boot. Any idea where in the heck this Windows boot is coming from? I don't have a PXE server, so it's not trying to boot from any network setup.

[screenshots attached]

Thanks
-Craig
 
Since the boot-order checkboxes are hidden in your screenshot, can you tell us whether you actually go through the Linux install and commit to the disk-wiping stage?
 
Boot order is scsi0, ide0. If I change it to ide0 first, it does boot the ISO, but why is there any sort of OS on a newly created disk? I haven't gone through the Linux install yet because it was just so weird that a Windows OS was booting. Thanks.
 
I thought of that as well. I tried creating with a new VMID and it still boots to Windows. I did try creating two new VMs: the first one boots to Windows, the second one boots to the Linux installer like you would expect. When I deleted the VM, I confirmed no disk file was left over on the LVM storage device. Thanks.
 
Removing an LVM slice (if VRTX-SSD is LVM-based storage) is not enough to erase the data on the underlying disk. This has been discussed on the forum a few times, although I can't give you a link at this moment.
You can check by running "lsblk" and "blkid" on the hypervisor against the LVM slice that gets created for the new VM.
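A minimal sketch of that check. The device path and VMID below are hypothetical (adjust to your storage and VM), and the runnable part demonstrates the idea on a scratch image file rather than a real LVM slice:

```shell
# On the hypervisor you would inspect the real slice, e.g. (hypothetical names):
#   lsblk /dev/VRTX-SSD/vm-105-disk-0
#   blkid -p /dev/VRTX-SSD/vm-105-disk-0
# Self-contained demo on a scratch file standing in for the slice:
truncate -s 16M /tmp/demo-slice.img
mkswap /tmp/demo-slice.img >/dev/null            # simulate a leftover signature (in your case it would be ntfs)
blkid -p -o value -s TYPE /tmp/demo-slice.img    # prints: swap
```

If `blkid -p` reports a TYPE such as ntfs on a freshly created slice, the "new" disk is reusing extents that still hold the old Windows data.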



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Yes, VRTX-SSD is LVM-based. So what is the correct way? Using the PVE GUI to remove (delete) a VM isn't enough? Can you provide an example of how to use the above commands? Thanks.
 
In the PVE API here:
https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/disks/lvmthin
and here:
https://pve.proxmox.com/pve-docs/api-viewer/#/nodes/{node}/disks/wipedisk

You can make a call with the option described as "Also wipe disks so they can be repurposed afterwards."

I'd have to look to see how it is implemented in the UI or CLI; I do not recall right now. We don't use LVM in our implementation, so this is not at the top of my mind.

You can run wipefs against the LVM slice from the hypervisor prior to starting the VM. This should clear out any remaining signatures from the old data.
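A sketch of that wipe. Again, the real target would be the LVM slice itself (the path shown in the comment is hypothetical); the runnable part uses a scratch file so you can see the before/after behavior safely:

```shell
# On the hypervisor the target would be the slice, e.g. (hypothetical name):
#   wipefs -a /dev/VRTX-SSD/vm-105-disk-0
# Self-contained demo on a scratch file:
truncate -s 16M /tmp/demo-slice.img
mkswap /tmp/demo-slice.img >/dev/null      # simulate leftover data on the "new" disk
wipefs -a /tmp/demo-slice.img              # erase all known filesystem/RAID signatures
blkid -p /tmp/demo-slice.img || echo "no signatures left"
```

Note that `wipefs -a` only clears the magic signatures, which is enough to stop a firmware or bootloader from recognizing the old OS; it does not zero the full contents of the slice.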


