[SOLVED] Overwritten disk size in vm conf file, no longer matches actual size

Thebigluke

Member
Feb 14, 2022
Hi all,

TLDR: the conf file's disk size is set to 256G, the actual disk on storage is 935GB, and the VM won't launch (TASK ERROR: timeout: no zvol device link for 'vm-101-disk-2' found after 300 sec found.)

-----

So I have two different setups for each of my VMs, depending on which one is launched with the PCI/USB and GPU passthrough.

For example, if I launch my Windows 11 VM (101), a batch script copies 101.conf.main (which has all the passthroughs) to 101.conf, while the other VMs copy their base config (no passthroughs), e.g. 201.conf.base to 201.conf. That way only one machine can use the passthroughs at any one time; the others are forced to use the console/remote desktop if launched.
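The swap scheme above can be sketched roughly as follows. The helper name and config path are assumptions, not from the original scripts: on Proxmox VE the live configs normally live in /etc/pve/qemu-server/, so adjust CONF_DIR to match your setup.

```shell
#!/bin/sh
# Sketch of the conf-swap scheme: one VM gets the passthrough config,
# every other VM is forced onto its base (no-passthrough) config.
activate_passthrough() {
    vmid="$1"       # VM that gets the passthroughs
    others="$2"     # space-separated list of the other VM IDs
    dir="${CONF_DIR:-/etc/pve/qemu-server}"
    # Give the chosen VM the full passthrough config...
    cp "$dir/$vmid.conf.main" "$dir/$vmid.conf"
    # ...and demote the rest to their base config.
    for other in $others; do
        cp "$dir/$other.conf.base" "$dir/$other.conf"
    done
}

# Example: launch-time setup for the Windows 11 VM (101), demoting 201:
# activate_passthrough 101 "201"
```

Note that whichever config was active in NNN.conf at swap time is silently overwritten, which is exactly how the stale disk size below crept back in.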

My issue right now: I had a 256G disk assigned to my Windows 11 VM, then I resized it to take advantage of the full 1TB of disk space, using gparted to move the recovery partition to the end so Windows could access the free space. All was good and worked perfectly, until I launched another VM...

You guessed it: I had forgotten to change the disk size in my 101.conf.base and 101.conf.main files, so they overwrote the newly created disk size in the original 101.conf, and I have no idea what the correct size was. Needless to say, the VM won't launch at all now.

I get the error: TASK ERROR: timeout: no zvol device link for 'vm-101-disk-2' found after 300 sec found.

The only thing that changed was the disk size, so I assume this is the problem.

I've checked my storage and that particular disk is using 935.23GB. I've tried various sizes in the 101.conf file more or less at random (multiples of 64, etc.) and none have worked...
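For what it's worth, guessing shouldn't be necessary on a ZFS-backed disk. This is a hedged sketch assuming the zvol lives under a dataset like rpool/data (adjust to the actual pool): the "used" figure the storage view reports includes metadata and is not the logical disk size, so query the zvol's volsize instead and convert it to the GiB value the conf's size= field uses.

```shell
# Ask ZFS for the zvol's logical size in exact bytes (-p = parsable):
#   zfs get -Hp -o value volsize rpool/data/vm-101-disk-2
#
# Helper to convert that byte count to whole GiB for the size= field:
bytes_to_gib() {
    # round up so the conf never understates the real disk
    echo $(( ($1 + 1073741823) / 1073741824 ))
}

# e.g. a 256 GiB zvol reports 274877906944 bytes:
# bytes_to_gib 274877906944   # -> 256
```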

Any ideas on what it should be set to before I wipe the whole thing and start over?!

(Thanks for reading this overly long post of a seemingly simple issue)
 
OK, I gave up and destroyed the Windows 11 VM, cloning a fresh 64GB one from a template.

After resizing the disk to take the entire 1TB as outlined above using gparted, and then editing my 101.conf.main and 101.conf.base files accordingly (this time!), I realised the old ones were set to access the previous zpool disk (I'd forgotten I'd moved from a 64GB install on NVMe to a 1TB SSD!).

So it looks like I'm to blame entirely here, for not realising it was referencing a previous zpool disk which didn't exist any more - DUH!

I have no idea if changing the size of the disk actually breaks anything now; I thought that was my issue, but apparently I'm just stupid...
 
Hi,
glad you were able to solve your issue. Please mark the thread as resolved by editing it and selecting the appropriate prefix.

FYI, the disk size in the configuration file is (mostly) informational: Proxmox VE will check the actual size whenever it needs it (e.g. during a local storage migration, to allocate the disk on the target). You can use qm rescan --vmid <ID> to update the config to the actual disk sizes (and detect orphaned disks) for a specific VM.
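A minimal sketch of that fix, using VM 101 from this thread (the conf line below is an illustrative example, not taken from a real config):

```shell
# Let Proxmox VE correct the recorded size itself:
#   qm rescan --vmid 101
#
# To see what a conf line currently claims, extract its size= field:
conf_size() {
    printf '%s\n' "$1" | sed -n 's/.*size=\([^,]*\).*/\1/p'
}

# conf_size "scsi0: local-zfs:vm-101-disk-2,size=256G"   # -> 256G
```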