VM disk missing after resize

_anderson

New Member
Jan 22, 2025
Hi everyone,

I’m new to Proxmox, so apologies in advance if this is a mistake on my part, but I’m encountering an issue that seems odd, and I’m not sure if it’s a bug or something I misconfigured.

Here’s the situation: I had a Windows VM with two disks. The first disk was 52 GB, the main system drive, and the second was 500 GB, where I stored my Steam games. I also had GPU passthrough configured for this VM, but it was disabled at the time I performed the operations described below.

What happened: I moved the primary disk (52 GB) to a 2 TB ZFS mirror storage called vault-drives, the same storage that already held the 500 GB disk. Then I resized both disks, growing the primary disk by 120 GB (52 GB to 172 GB) and the second disk by 300 GB (500 GB to 800 GB). The task logs indicated that both operations completed successfully.
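
For reference, the CLI equivalents of these operations would be roughly the following (just a sketch, not necessarily the exact commands or order that were run):

qm move-disk 100 scsi0 vault-drives
qm resize 100 scsi0 +120G
qm resize 100 scsi1 +300G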

Shortly afterwards I rebooted the PVE host. However, when I tried to use the VM again, both disks were missing. The usage graph for vault-drives in ZFS shows a reduction in space used, as if the disks had been deleted during or after the resize operation.

(Attachment: Screenshot 2025-01-22 at 11.55.18.png, showing the vault-drives usage graph)
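
The same drop should be visible from the CLI as well; something like this (assuming vault-drives sits on the vault pool, which is my guess from the naming) would show the current space accounting:

zfs list -o space -r vault
zpool list vault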


Thankfully, I do have a backup for the system disk and the VM, but unfortunately, I didn’t back up the 500 GB disk with the Steam games. While this isn’t a huge issue (I can re-download everything from Steam), it’s highly inconvenient as it will take a lot of time.

I’d appreciate any insight into what might have caused this. Was there something I overlooked during the disk move/resize? I can provide any relevant logs or additional details if needed.
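
The pvedaemon log from around the time of the operations is below. For reference, an excerpt like this can be pulled from the journal with something along the lines of the following (time window picked to match the entries shown):

journalctl -u pvedaemon --since '2025-01-22 11:00' --until '2025-01-22 11:20'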

Jan 22 11:04:26 bigboi pvedaemon[147101]: <root@pam> successful auth for user 'root@pam'
Jan 22 11:04:59 bigboi pvedaemon[146324]: <root@pam> update VM 100: -ide2 none,media=cdrom,size=5689984K
Jan 22 11:05:05 bigboi pvedaemon[146750]: <root@pam> update VM 100: -ide0 none,media=cdrom,size=707456K
Jan 22 11:05:27 bigboi pvedaemon[146750]: <root@pam> starting task UPID:bigboi:0002476E:003CB3AC:6790FB27:resize:100:root@pam:
Jan 22 11:05:27 bigboi pvedaemon[149358]: <root@pam> update VM 100: resize --disk scsi1 --size +300G
Jan 22 11:05:27 bigboi pvedaemon[146750]: <root@pam> end task UPID:bigboi:0002476E:003CB3AC:6790FB27:resize:100:root@pam: OK
Jan 22 11:06:27 bigboi pvedaemon[146324]: <root@pam> starting task UPID:bigboi:00024851:003CCB33:6790FB63:qmmove:100:root@pam:
Jan 22 11:06:27 bigboi pvedaemon[149585]: <root@pam> move disk VM 100: move --disk scsi0 --storage vault-drives
Jan 22 11:08:50 bigboi pvedaemon[147101]: <root@pam> successful auth for user 'root@pam'
Jan 22 11:09:09 bigboi pvedaemon[146324]: <root@pam> end task UPID:bigboi:00024851:003CCB33:6790FB63:qmmove:100:root@pam: OK
Jan 22 11:10:00 bigboi pvedaemon[147101]: <root@pam> starting task UPID:bigboi:00024BA0:003D1E42:6790FC38:resize:100:root@pam:
Jan 22 11:10:00 bigboi pvedaemon[150432]: <root@pam> update VM 100: resize --disk scsi0 --size +120G
Jan 22 11:10:00 bigboi pvedaemon[147101]: <root@pam> end task UPID:bigboi:00024BA0:003D1E42:6790FC38:resize:100:root@pam: OK
Jan 22 11:10:13 bigboi pvedaemon[150537]: starting termproxy UPID:bigboi:00024C09:003D2390:6790FC45:vncshell::root@pam:
Jan 22 11:10:13 bigboi pvedaemon[147101]: <root@pam> starting task UPID:bigboi:00024C09:003D2390:6790FC45:vncshell::root@pam:
Jan 22 11:10:14 bigboi pvedaemon[146750]: <root@pam> successful auth for user 'root@pam'
Jan 22 11:10:14 bigboi login[150541]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jan 22 11:10:46 bigboi pvedaemon[146750]: vm 100 - unable to parse config: hostpci0%3A 0000%3A07%3A00,pcie=1,x-vga=1,romfile=>
Jan 22 11:10:48 bigboi pvedaemon[147101]: <root@pam> end task UPID:bigboi:00024C09:003D2390:6790FC45:vncshell::root@pam: OK
Jan 22 11:10:48 bigboi pvedaemon[147101]: vm 100 - unable to parse config: hostpci0%3A 0000%3A07%3A00,pcie=1,x-vga=1,romfile=>
Jan 22 11:10:49 bigboi pvedaemon[146324]: vm 100 - unable to parse config: hostpci0%3A 0000%3A07%3A00,pcie=1,x-vga=1,romfile=>
Jan 22 11:11:41 bigboi pvedaemon[146750]: <root@pam> starting task UPID:bigboi:00024D32:003D45BB:6790FC9D:vncshell::root@pam:
Jan 22 11:11:41 bigboi pvedaemon[150834]: starting termproxy UPID:bigboi:00024D32:003D45BB:6790FC9D:vncshell::root@pam:
Jan 22 11:11:41 bigboi pvedaemon[147101]: <root@pam> successful auth for user 'root@pam'
Jan 22 11:11:41 bigboi login[150837]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jan 22 11:12:55 bigboi pvedaemon[146750]: <root@pam> end task UPID:bigboi:00024D32:003D45BB:6790FC9D:vncshell::root@pam: OK
Jan 22 11:13:07 bigboi pvedaemon[151131]: starting termproxy UPID:bigboi:00024E5B:003D673F:6790FCF3:vncshell::root@pam:
Jan 22 11:13:07 bigboi pvedaemon[147101]: <root@pam> starting task UPID:bigboi:00024E5B:003D673F:6790FCF3:vncshell::root@pam:
Jan 22 11:13:07 bigboi pvedaemon[146324]: <root@pam> successful auth for user 'root@pam'
Jan 22 11:13:07 bigboi login[151134]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jan 22 11:13:10 bigboi pvedaemon[147101]: <root@pam> end task UPID:bigboi:00024E5B:003D673F:6790FCF3:vncshell::root@pam: OK
Jan 22 11:13:18 bigboi systemd[1]: Stopping pvedaemon.service - PVE API Daemon...
Jan 22 11:13:19 bigboi pvedaemon[2560]: received signal TERM
Jan 22 11:13:19 bigboi pvedaemon[2560]: server closing
Jan 22 11:13:19 bigboi pvedaemon[147101]: worker exit
Jan 22 11:13:19 bigboi pvedaemon[146750]: worker exit
Jan 22 11:13:19 bigboi pvedaemon[146324]: worker exit
Jan 22 11:13:19 bigboi pvedaemon[2560]: worker 147101 finished
Jan 22 11:13:19 bigboi pvedaemon[2560]: worker 146750 finished
Jan 22 11:13:19 bigboi pvedaemon[2560]: worker 146324 finished
Jan 22 11:13:19 bigboi pvedaemon[2560]: server stopped
Jan 22 11:13:20 bigboi systemd[1]: pvedaemon.service: Deactivated successfully.
Jan 22 11:13:20 bigboi systemd[1]: Stopped pvedaemon.service - PVE API Daemon.
Jan 22 11:13:20 bigboi systemd[1]: pvedaemon.service: Consumed 4min 30.078s CPU time.
-- Boot 21d1e8d81a5e4b1b8612916275a5b511 --
Jan 22 11:15:01 bigboi systemd[1]: Starting pvedaemon.service - PVE API Daemon...
Jan 22 11:15:01 bigboi pvedaemon[2559]: starting server
Jan 22 11:15:01 bigboi pvedaemon[2559]: starting 3 worker(s)
Jan 22 11:15:01 bigboi pvedaemon[2559]: worker 2560 started
Jan 22 11:15:01 bigboi pvedaemon[2559]: worker 2561 started
Jan 22 11:15:01 bigboi pvedaemon[2559]: worker 2562 started
Jan 22 11:15:01 bigboi systemd[1]: Started pvedaemon.service - PVE API Daemon.
Jan 22 11:15:03 bigboi pvedaemon[2560]: <root@pam> successful auth for user 'root@pam'
Jan 22 11:15:04 bigboi pvedaemon[2581]: starting termproxy UPID:bigboi:00000A15:00001C36:6790FD68:vncshell::root@pam:
Jan 22 11:15:04 bigboi pvedaemon[2562]: <root@pam> starting task UPID:bigboi:00000A15:00001C36:6790FD68:vncshell::root@pam:
Jan 22 11:15:04 bigboi pvedaemon[2560]: <root@pam> successful auth for user 'root@pam'
Jan 22 11:15:04 bigboi login[2586]: pam_unix(login:session): session opened for user root(uid=0) by (uid=0)
Jan 22 11:15:14 bigboi pvedaemon[2562]: <root@pam> end task UPID:bigboi:00000A15:00001C36:6790FD68:vncshell::root@pam: OK
Jan 22 11:16:43 bigboi pvedaemon[2561]: <root@pam> update VM 100: -delete vga
Jan 22 11:16:55 bigboi pvedaemon[2560]: <root@pam> starting task UPID:bigboi:00000BD0:000047BA:6790FDD7:qmstart:100:root@pam:
Jan 22 11:16:55 bigboi pvedaemon[3024]: start VM 100: UPID:bigboi:00000BD0:000047BA:6790FDD7:qmstart:100:root@pam:
Jan 22 11:16:55 bigboi pvedaemon[3024]: volume 'vault-drives:100/vm-100-disk-1.qcow2' does not exist
Jan 22 11:16:55 bigboi pvedaemon[2560]: <root@pam> end task UPID:bigboi:00000BD0:000047BA:6790FDD7:qmstart:100:root@pam: volu>
Jan 22 11:17:54 bigboi pvedaemon[2562]: <root@pam> starting task UPID:bigboi:00000CF6:00005EB6:6790FE12:vncshell::root@pam:
Jan 22 11:17:54 bigboi pvedaemon[3318]: starting termproxy UPID:bigboi:00000CF6:00005EB6:6790FE12:vncshell::root@pam:
Jan 22 11:17:54 bigboi pvedaemon[2561]: <root@pam> successful auth for user 'root@pam'
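
In case it's useful, I can also check whether the image from the error above still exists on the storage. Something like this should work (the volume ID is taken from the error message, and I'm assuming vault-drives is a directory-style storage since the error mentions a .qcow2 file, so treat this as a sketch):

pvesm list vault-drives
pvesm path vault-drives:100/vm-100-disk-1.qcow2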

root@bigboi:~# pveversion -v
proxmox-ve: 8.3.0 (running kernel: 6.8.12-6-pve)
pve-manager: 8.3.2 (running version: 8.3.2/3e76eec21c4a14a7)
proxmox-kernel-helper: 8.1.0
proxmox-kernel-6.8: 6.8.12-6
proxmox-kernel-6.8.12-6-pve-signed: 6.8.12-6
proxmox-kernel-6.8.12-4-pve-signed: 6.8.12-4
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2+deb12u1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.4
libpve-access-control: 8.2.0
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.10
libpve-cluster-perl: 8.0.10
libpve-common-perl: 8.2.9
libpve-guest-common-perl: 5.1.6
libpve-http-server-perl: 5.1.2
libpve-network-perl: 0.10.0
libpve-rs-perl: 0.9.1
libpve-storage-perl: 8.3.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.5.0-1
proxmox-backup-client: 3.3.2-1
proxmox-backup-file-restore: 3.3.2-2
proxmox-firewall: 0.6.0
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.3.1
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.7
proxmox-widget-toolkit: 4.3.3
pve-cluster: 8.0.10
pve-container: 5.2.3
pve-docs: 8.3.1
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.2
pve-firewall: 5.1.0
pve-firmware: 3.14-2
pve-ha-manager: 4.0.6
pve-i18n: 3.3.2
pve-qemu-kvm: 9.0.2-4
pve-xtermjs: 5.3.0-3
qemu-server: 8.3.3
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.6-pve1

root@bigboi:~# zpool status
  pool: atlas
 state: ONLINE
config:

        NAME                                        STATE     READ WRITE CKSUM
        atlas                                       ONLINE       0     0     0
          nvme-WDC_WDS480G2G0C-00AJM0_21093Z804040  ONLINE       0     0     0

errors: No known data errors

  pool: plug
 state: ONLINE
config:

        NAME                                         STATE     READ WRITE CKSUM
        plug                                         ONLINE       0     0     0
          usb-Realtek_RTL9210_NVME_012345678904-0:0  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
config:

        NAME                              STATE     READ WRITE CKSUM
        tank                              ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            ata-ADATA_SU650_4N33214R6TO3  ONLINE       0     0     0
            ata-ADATA_SU650_4N3321L3EZ0L  ONLINE       0     0     0

errors: No known data errors

  pool: vault
 state: ONLINE
config:

        NAME                                        STATE     READ WRITE CKSUM
        vault                                       ONLINE       0     0     0
          mirror-0                                  ONLINE       0     0     0
            nvme-WD_Green_SN350_2TB_223320800824_1  ONLINE       0     0     0
            nvme-WD_Green_SN350_2TB_23171W803842_1  ONLINE       0     0     0

errors: No known data errors

root@bigboi:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
atlas 425G 5.27G 96K /atlas
atlas/temp 425G 5.27G 425G /atlas/temp
plug 152G 62.9G 104K /plug
plug/vm-drives 152G 62.9G 152G /plug/vm-drives
tank 262G 168G 104K /tank
tank/files 244G 168G 244G /tank/files
tank/pve-backups 18.3G 168G 18.3G /tank/pve-backups
tank/share 3.45M 168G 3.45M /tank/share
vault 547G 1.22T 172G /vault
vault/share 96K 1.22T 96K /vault/share
vault/temp 375G 1.22T 375G /vault/temp
vault/vm-drives 104K 1.22T 104K /vault/vm-drives
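
If it would help, I can also post the pool history and the contents of the dataset that I believe backs vault-drives (I'm guessing it's vault/vm-drives based on the mountpoints above, so the path below is an assumption):

ls -lah /vault/vm-drives/images/100/
zpool history vault | tail -n 50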

root@bigboi:~# cat /etc/pve/nodes/bigboi/qemu-server/100.conf.bk
bios: ovmf
boot: order=scsi0;ide0;ide2;net0
cores: 4
cpu: host
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
#hostpci0: 0000:07:00,pcie=1,x-vga=1,romfile=nvidia-3060.rom
ide0: local:iso/virtio-win.iso,media=cdrom,size=707456K
ide2: local:iso/Win10_22H2_BrazilianPortuguese_x64v1.iso,media=cdrom,size=5689984K
machine: pc-q35-9.0
memory: 32768
meta: creation-qemu=9.0.2,ctime=1737075645
name: Windows
net0: virtio=BC:24:11:BE:17:8F,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
scsi0: local-lvm:vm-100-disk-1,iothread=1,size=52G
scsihw: virtio-scsi-single
smbios1: uuid=56025a1c-5b44-4495-8ba8-db7729023ec9
sockets: 1
tpmstate0: local-lvm:vm-100-disk-2,size=4M,version=v2.0
unused0: plug-drives:100/vm-100-disk-0.qcow2
usb1: host=05ac:024f,usb3=1
usb2: host=046d:c52b,usb3=1
usb3: host=046d:c534,usb3=1
vga: none
virtio0: plug-drives:100/vm-100-disk-1.qcow2,backup=0,discard=on,iothread=1,size=250G
vmgenid: 23fdea0e-d02e-4a81-9dd7-5034f38f5fa3

I don't think this is related to the issue, but another strange thing I noticed: my configuration file had a commented-out line for GPU passthrough (the hostpci0 line above). After the disk move/resize operations, that commented line appeared to have been altered with some strange additional characters, which matches the "unable to parse config" entries in the log. Later, after the reboot, I decided to test GPU passthrough again and went to uncomment the line; that's when I noticed the corruption, so I restored the original line from a backup of the configuration file and started the VM. That's when the VM wouldn't start and reported that the disks no longer existed.
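
The restore itself was basically just taking the original hostpci0 line back out of the .bk file shown above and pasting it over the mangled one in 100.conf, roughly:

grep hostpci0 /etc/pve/nodes/bigboi/qemu-server/100.conf.bk
# then paste that line over the corrupted one in 100.conf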

Thanks in advance for any help
 
