Host becomes unresponsive when running GParted inside VM

gubbins

New Member
Mar 6, 2026
Hi everyone,

I ran into a serious issue on my Proxmox host and I'm trying to understand what happened before attempting any further operations.

Environment
Proxmox VE host (recent version, fully updated)
VM disks use virtio (inside the VM they appear as vda)
VM host Windows Server
GParted was run from a live CD inside the VM

What I did
Inside the VM I booted into GParted to move a 200 GB partition.
Inside GParted I only saw /dev/vda, which is expected since it's a virtio virtual disk.

During the operation, the Proxmox web UI became unresponsive after GParted showed the operation at 30% complete.

Symptoms
Web UI stopped responding completely
SSH was also unreachable
No response to ping
The disk activity LEDs on the server kept blinking constantly (no pause, same pattern)
It looked like the system was stuck
The system remained in this state for more than 14 hours

Because the disks looked active, I assumed it might eventually finish, so I waited.

After ~16 hours nothing had changed:
Web UI still inaccessible
SSH was also unreachable
No response to ping
LEDs still blinking constantly (again, no pause or change in pattern)

At that point I powered the server off manually.

After reboot:
The host came back normally
I did not start any VMs
I immediately created backups of all VMs (I know, this should have been done before moving the partition)
Restored the problem VM (the one whose partition I was trying to move) from the backup to a different Proxmox server and tried to boot it. It booted into Windows, but the D: drive was corrupted (according to Windows)

Questions
1. What's the best way to proceed, and is it possible to restore the drive?
2. Is there any known issue where partition operations inside a VM (GParted) can cause the Proxmox host to become unresponsive like this?

I want to understand the root cause before attempting any similar operation again.

Thanks in advance for any insights.
 
I'm curious to see the output of running journalctl -kf on the node while you reproduce this issue. Start it just before the operation.
"VM host Windows Server"
I'm a bit confused by that. Is this a nested setup?
 
Unfortunately, journalctl on the Proxmox host isn't showing anything useful (copied verbatim):

VM 100 is the affected VM

Code:
Mar 05 16:53:03 pve qm[2664992]: VM 100 started with PID 2665015.
Mar 05 16:53:03 pve qm[2664991]: <root@pam> end task UPID:pve:0028AA20:179E051B:69A9A6DC:qmstart:100:root@pam: OK
Mar 05 16:53:03 pve pvedaemon[1694724]: <root@pam> starting task UPID:pve:0028AAC8:179E0638:69A9A6DF:vncproxy:100:root@pam:
Mar 05 16:53:03 pve pvedaemon[2665160]: starting vnc proxy UPID:pve:0028AAC8:179E0638:69A9A6DF:vncproxy:100:root@pam:
Mar 05 16:53:03 pve pvedaemon[2665166]: starting vnc proxy UPID:pve:0028AACE:179E0647:69A9A6DF:vncproxy:100:root@pam:
Mar 05 16:53:03 pve pvedaemon[1694754]: <root@pam> starting task UPID:pve:0028AACE:179E0647:69A9A6DF:vncproxy:100:root@pam:
Mar 05 16:53:03 pve pvedaemon[1694724]: <root@pam> end task UPID:pve:0028AAC8:179E0638:69A9A6DF:vncproxy:100:root@pam: OK
Mar 05 16:54:44 pve pvedaemon[1694754]: <root@pam> end task UPID:pve:0028AACE:179E0647:69A9A6DF:vncproxy:100:root@pam: OK
Mar 05 16:54:51 pve pvedaemon[2665463]: starting vnc proxy UPID:pve:0028ABF7:179E3083:69A9A74B:vncproxy:100:root@pam:
Mar 05 16:54:51 pve pvedaemon[1694754]: <root@pam> starting task UPID:pve:0028ABF7:179E3083:69A9A74B:vncproxy:100:root@pam:
-- Boot 2090ef12306a4a638bb7fcbbd1fd3fd6 --
Mar 06 08:08:34 pve kernel: Linux version 6.8.12-17-pve (build@proxmox) (gcc (Debian 12.2.0-14+deb12u1) 12.2.0, GNU ld (GNU Binutils for Debian) 2.40) #1 SMP>
Mar 06 08:08:34 pve kernel: Command line: BOOT_IMAGE=/boot/vmlinuz-6.8.12-17-pve root=/dev/mapper/pve-root ro quiet
Mar 06 08:08:34 pve kernel: KERNEL supported cpus:
Mar 06 08:08:34 pve kernel:   Intel GenuineIntel
Mar 06 08:08:34 pve kernel:   AMD AuthenticAMD
Mar 06 08:08:34 pve kernel:   Hygon HygonGenuine
Mar 06 08:08:34 pve kernel:   Centaur CentaurHauls
Mar 06 08:08:34 pve kernel:   zhaoxin   Shanghai
Mar 06 08:08:34 pve kernel: BIOS-provided physical RAM map:
Mar 06 08:08:34 pve kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 06 08:08:34 pve kernel: BIOS-e820: [mem 0x00000000000a0000-0x00000000000fffff] reserved
....

"VM host Windows Server"
Sorry, the host is Proxmox; the VM is Windows Server
 
When running TestDisk on the first boot of the clone, this is the result:

[screenshots of TestDisk partition scan results attached]

It would be nice if I could recover the [Data] partition
 
Okay, and what does the I/O on the PVE host look like when you try to move the partition in your VM?
I think you have an I/O problem with this consumer SSD.
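For what it's worth, here's a quick way to watch per-disk write activity on the host while reproducing, using only /proc (a rough sketch, no extra packages needed; the /tmp/ds1 path is just an example):

```shell
# Snapshot sectors-written per block device, wait two seconds,
# then print the delta for each device. In /proc/diskstats,
# field 3 is the device name and field 10 is total sectors written.
cat /proc/diskstats > /tmp/ds1
sleep 2
awk 'NR==FNR {w[$3]=$10; next} ($3 in w) {print $3, $10 - w[$3], "sectors written in 2s"}' \
    /tmp/ds1 /proc/diskstats
```

If you can install the sysstat package, `iostat -x 2` gives a richer view (device utilization, queue depth, await times), which is more useful for spotting a saturated consumer SSD.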