[SOLVED] Lost the disk for 1 vm: No such volume pve/vm-103-disk-1

Digiones

Oct 24, 2025
Hi everyone,


I’m working in a test lab environment with a 3-node Proxmox cluster (pve1, pve2, pve3).
Each node has 2 local disks, except pve2, which has 3.

I had a local Exchange Server VM (ID 103, named “exch”) running on pve2.
The VM was fully installed (Windows + Exchange) and I was connected to the Exchange Admin Center when the issue happened.


I briefly left the environment running, and when I came back:
  • Two of my nodes (pve2 and pve3) had a red cross in the Proxmox web UI (most likely a temporary network issue).
  • After checking the hosts, I managed to bring all nodes back online and visible again in the cluster.

After the cluster recovered:


  • Both VM 103 (exch) and VM 110 (AD) were no longer responding.
  • I rebooted both:
    • AD (VM 110) came back fine.
    • Exchange (VM 103) failed to boot — the disk seems to be missing.


What I Checked


  • Both VMs were originally created on pve2, using local LVM-Thin storage.
  • They had been migrated between nodes before (via the ProxLB VM-balancing tool).
  • When running lsblk on all nodes, I don’t see the Exchange VM’s disk anywhere (see the checks sketched after this list).
  • In the web GUI, the disk appears as missing (gray/unavailable).
  • Removing the EFI disk reference didn’t change anything.
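
Worth noting: lsblk only shows activated block devices, so an LVM-thin volume that exists but isn’t activated won’t appear there. Querying LVM and Proxmox directly is more reliable. A minimal sketch, assuming the default pve volume group and local-lvm storage names:

Code:
# list all LVs in the pve VG (lsblk only shows activated ones)
lvs -a pve
# details for the specific volume, if it still exists
lvdisplay pve/vm-103-disk-1
# what Proxmox itself sees on the thin-pool storage
pvesm list local-lvm
# which disks the VM config currently references
qm config 103

If lvs lists the volume but lsblk doesn’t, the LV exists and merely isn’t activated; lvchange -ay pve/vm-103-disk-1 would activate it.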

The only thing I see is this in the GUI (screenshots attached).

What I’d Like to Know

  • Is there any way to recover the missing VM disk (or data) from pve2?
  • Or at least, how can I understand what happened: did Proxmox unmap or lose the LVM-thin volume after the network disruption?
  • Any command or log I could check to confirm whether the LV still exists but is detached or corrupt?

Thanks in advance
 
Hi,

It turned out that my issue wasn’t actually a missing disk, but a failed migration.
I’m still a bit confused about what exactly happened and why I couldn’t see the disk anywhere at first.

Anyway, I hadn’t seen the thread link during my research, but after manually moving the file, the VM was able to start again and the disk finally appeared in lsblk.
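
For anyone landing here later: assuming "the file" here is the VM config that the failed migration left behind on the wrong node, the manual fix is moving it back, which can be done from any cluster node since /etc/pve is the shared cluster filesystem. A sketch with placeholder node names:

Code:
# move the VM config back to the node that actually holds the disk
# (pve3 -> pve2 are example source/target nodes)
mv /etc/pve/nodes/pve3/qemu-server/103.conf /etc/pve/nodes/pve2/qemu-server/103.conf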


Thanks for the help!
 
I’m still a bit confused about what exactly happened and why I couldn’t see the disk anywhere at first.
You configured HA and the VM was automatically recovered to another node after the node it was running on failed. But the VM has disks on local storage, which is not supported with HA. HA requires shared or replicated storage.
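
For reference, this can be confirmed from any node; a sketch, with nothing specific to this setup assumed:

Code:
# which resources HA manages, and their current state
ha-manager status
ha-manager config
# the CRM/LRM services log every HA recovery action
journalctl -u pve-ha-crm -u pve-ha-lrm --since yesterday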
 
You configured HA and the VM was automatically recovered to another node after the node it was running on failed. But the VM has disks on local storage, which is not supported with HA. HA requires shared or replicated storage.
That was the root cause.
Due to hardware limitations, I wasn’t able to host all VMs on the NAS or Ceph configuration, so I ended up using local storage for some of them.
I won't forget to remove these VMs from HA next time.
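
For the record, pulling a VM out of HA management is a single command; a sketch, with vm:103 as the example resource ID:

Code:
# stop HA from managing the VM; the VM itself is untouched
ha-manager remove vm:103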

Thank you again for your time.
 