qcow2 images missing after power loss - help needed

pjman7

New Member
Dec 22, 2024
I believe my house lost power and the server shut down, but it didn't come back up on its own due to a hardware quirk that requires me to press F1 to continue the boot. When it finally booted, none of my VMs started.

I listed the VMs and tried to start the important one, 103, but it wouldn't start because it was locked. I unlocked it and tried again, and this time it failed because it couldn't find the disk. When I went to the mount point where that disk lived, all of the disks for that VM were gone, and the mount is essentially empty. The two disks on this VM accounted for over 2 TB of usage, and it was the only VM I was running from that storage.

I think what might have happened is this: I recently tried cleaning up snapshots to create a fresh one. The cleanup failed and locked the VM, but the VM stayed running. After the reboot and my subsequent unlock, Proxmox apparently deleted all the disks.

Is there a way to recover from this?
 
I'm not sure what to do. I have tried searching for the deleted files: I tried tools like testdisk (which I can't even get to download and install), tried debugfs, and tried navigating to the directory where the images should be to see if there are any hidden files, but no luck.

I figured I could just find the files somewhere on the disk and remove a leading "." to restore them, but it's not going well.
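On the deleted-file angle, one hedged avenue (assuming the images sat on an ext2/3/4 filesystem) is debugfs from e2fsprogs, which can list inodes the filesystem remembers deleting. This is a sketch, not a recipe: the device path is an assumption, and recovery should be attempted from a rescue environment with the partition unmounted so nothing overwrites the freed blocks.

```shell
# Sketch only -- assumes the qcow2 images sat on an ext2/3/4 filesystem.
# Run from a rescue environment with the partition UNMOUNTED.
DEV=/dev/nvme0n1p1   # assumption -- substitute the partition that held the images

if command -v debugfs >/dev/null && [ -b "$DEV" ]; then
    # List inodes the filesystem remembers deleting (ext2/3/4 only):
    debugfs -R "lsdel" "$DEV" || true
    # If a candidate inode (say 1234) shows up, dump it to a DIFFERENT disk:
    # debugfs -R "dump <1234> /mnt/other-disk/recovered.qcow2" "$DEV"
fi
```

Note that lsdel often finds little on modern ext4 (freed metadata is zeroed aggressively), so treat this as a long shot rather than a likely fix.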
 
First question is: how were you cleaning up snapshots? From the Proxmox GUI?
If so, Proxmox will never delete the VM disk image, because the Datacenter > Node > VM > Snapshot interface simply gives you no access to the VM disk image itself.

If you were cleaning up (ZFS) snapshots manually, then yes, you may have deleted the main disk image as well. In most cases the VM will keep running for a little while even after its disk image is deleted, because parts of the OS are still in memory; it just won't accept or write new data.
 
Invest in a UPS. If you don't have backups, you're going to have to rebuild the VM.

Then start doing regular backups.
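For reference, regular backups can also be driven from the CLI with the stock vzdump tool; a sketch (the storage ID below is a placeholder, not something from this thread):

```shell
# Sketch: back up VM 103 with vzdump (ships with Proxmox VE).
# "backup-store" is a placeholder storage ID -- substitute your own.
cmd="vzdump 103 --mode snapshot --compress zstd --storage backup-store"

# Only actually run it on a node where vzdump exists:
command -v vzdump >/dev/null && $cmd || true

echo "$cmd"
```

On a real node the same job is usually configured once under Datacenter > Backup so it runs on a schedule instead of by hand.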
Very true.

You can also mitigate power-failure issues by using a safer cache mode such as writethrough or directsync, at the cost of performance. A raw disk image will also shield you from some power-loss and abrupt-shutdown issues.
The XFS file system is notorious for corruption after power failure. If you are not going to invest in a UPS that auto-shuts-down the nodes during a power failure, use ext4 for the VM.
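If you do want to switch an existing disk to a safer cache mode, `qm set` can rewrite the drive line. A sketch, reusing the scsi1 definition that appears later in this thread; verify the full option string against your own config first, because `qm set` replaces the whole drive definition:

```shell
# Sketch: change the cache mode of an existing virtual disk with qm set.
# Volume and options copied from this VM's config, with cache= changed.
opts="VM-Disk3:103/vm-103-disk-1.qcow2,backup=0,cache=directsync,discard=on,size=2000G,ssd=1"

# Only meaningful on the Proxmox node itself:
command -v qm >/dev/null && qm set 103 --scsi1 "$opts" || true

echo "$opts"
```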
 
Invest in a UPS. If you don't have backups, you're going to have to rebuild the VM.

Then start doing regular backups.
Thanks, but that's not really helpful, to be honest. I had a battery backup; it failed. I was taking backups of the OS partition. The second disk, another image over 2 TB large, I did not have the storage to back up, and all of its data is recoverable as long as I have the OS.

I need help knowing where in Proxmox to look at the logs. My guess is that the snapshot cleanup I attempted in the web GUI failed and put the VM into a locked state, and when the node rebooted, it saw the disks were not locked and deleted the image files. Or it happened once I unlocked the VM through the terminal and tried to start it, at which point it said the disks no longer existed.
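On where to look: a stock Proxmox VE node keeps one log file per finished task under /var/log/pve/tasks, indexed by UPID strings, and the UPID itself encodes the task type and the VMID. A hedged sketch of picking a UPID apart (the UPID value below is invented for illustration):

```shell
# A Proxmox task UPID has the shape:
#   UPID:<node>:<pid>:<pstart>:<starttime>:<type>:<vmid>:<user>:
# The example value below is invented for illustration.
upid="UPID:pve:0001ABCD:0012F3E4:6763A1B2:qmdelsnapshot:103:root@pam:"

task_type=$(echo "$upid" | cut -d: -f6)
vmid=$(echo "$upid" | cut -d: -f7)
echo "$task_type $vmid"   # -> qmdelsnapshot 103

# On the node itself, something like this should surface snapshot-delete
# tasks for VM 103 (path per stock PVE; verify on your system):
grep -r "qmdelsnapshot:103" /var/log/pve/tasks/ 2>/dev/null || true
```

Failed tasks also show up in the GUI under the node's Task History, where double-clicking an entry shows the same log text.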
 
First question is: how were you cleaning up snapshots? From the Proxmox GUI?
If so, Proxmox will never delete the VM disk image, because the Datacenter > Node > VM > Snapshot interface simply gives you no access to the VM disk image itself.

If you were cleaning up (ZFS) snapshots manually, then yes, you may have deleted the main disk image as well. In most cases the VM will keep running for a little while even after its disk image is deleted, because parts of the OS are still in memory; it just won't accept or write new data.
I was doing it through the web GUI, not through the terminal. The only time I used the terminal was this morning, to unlock the VM, because it wouldn't boot and I couldn't get into the web GUI, so the terminal was all I had access to. I wish I had checked then, because at least I would have known whether the disks were actually still there and taking up space on the drive they were located on.
 
Very true.

You can also mitigate power-failure issues by using a safer cache mode such as writethrough or directsync, at the cost of performance. A raw disk image will also shield you from some power-loss and abrupt-shutdown issues.
The XFS file system is notorious for corruption after power failure. If you are not going to invest in a UPS that auto-shuts-down the nodes during a power failure, use ext4 for the VM.
I believe the VM was using writethrough. It's annoying: when I built this server I'd had one power problem in ten years at the place I was living, but after I moved, brownouts and short outages became common. That's why I've tried buying a few different UPS units, and I've had issues getting them to support the server's load.

Since the VM images were on their own separate drive, I'm hoping there is a way to recover the image files. I would also love to find the logs from when I tried cleaning snapshots and see what error made it fail. I'm sure that even without this power issue, I would eventually have restarted and been left with the same problem. :(
 
Have you checked whether the physical disk is really mounted after the reboot? If the disk is not mounted, the mountpoint will obviously be empty. That could be a plausible reason why you are not seeing the disk images. What do you see if you run:
df -H
 
Have you checked whether the physical disk is really mounted after the reboot? If the disk is not mounted, the mountpoint will obviously be empty. That could be a plausible reason why you are not seeing the disk images. What do you see if you run:
df -H
Inside the Proxmox CLI it shows:

/dev/nvme0n1p1 3.2T 40G 3.2T 2% /mnt/pve/VM-Disk
/dev/nvme1n1p1 3.2T 14G 3.0T 1% /mnt/pve/VM-Disk2
which are my two main storage disks.

And here is the config file for the VM:
balloon: 2048
boot: order=scsi0;net0
cores: 6
memory: 49150
meta: creation-qemu=6.2.0,ctime=1663515855
name: eth2-prox
net0: vmxnet3=B6:27:39:95:88:F0,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
parent: fixed04092024
scsi0: VM-Disk3:103/vm-103-disk-0.qcow2,discard=on,size=50G
scsi1: VM-Disk3:103/vm-103-disk-1.qcow2,backup=0,cache=writethrough,discard=on,size=2000G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=68370242-2d1a-4375-8ab1-bf71f80f2dac
sockets: 3
startup: up=5
vmgenid: dfc8e04e-c59a-4ca4-9e12-8e65a21c96fe

[fixed04092024]
#After Spectrum Internet Issues
balloon: 2048
boot: order=scsi0;net0
cores: 6
memory: 49150
meta: creation-qemu=6.2.0,ctime=1663515855
name: eth2-prox
net0: vmxnet3=B6:27:39:95:88:F0,bridge=vmbr0
numa: 0
onboot: 1
ostype: l26
scsi0: VM-Disk3:103/vm-103-disk-0.qcow2,discard=on,size=50G
scsi1: VM-Disk3:103/vm-103-disk-1.qcow2,backup=0,cache=writethrough,discard=on,size=2000G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=68370242-2d1a-4375-8ab1-bf71f80f2dac
snapstate: delete
snaptime: 1712710238
sockets: 3
startup: up=5
vmgenid: dfc8e04e-c59a-4ca4-9e12-8e65a21c96fe


I attached screenshots from the GUI so you can understand the disks. Unfortunately, it looks like the location where the images were is mounted.
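One more thing worth checking from the config above: both disks live on a storage called VM-Disk3, while df only shows VM-Disk and VM-Disk2 mounted. A sketch (stock PVE tools) of asking the storage layer where that volume is supposed to be:

```shell
# Sketch: resolve where Proxmox expects the qcow2 volume to live on disk.
vol="VM-Disk3:103/vm-103-disk-0.qcow2"

# Only meaningful on the Proxmox node itself:
if command -v pvesm >/dev/null; then
    pvesm path "$vol" || true    # prints the expected absolute file path
    pvesm list VM-Disk3 || true  # lists volumes the storage can still see
fi

echo "$vol"
```

If `pvesm path` points somewhere other than the two mounts above, the images may simply be on a storage that didn't come back up after the reboot.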
 

Attachments

  • proxmox disks 2.PNG (50.8 KB)
  • proxmox disks.PNG (40.2 KB)
  • proxmox dc disks.PNG (47.3 KB)
Here's the lsblk output as well:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 185.8G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part /boot/efi
└─sda3 8:3 0 185.2G 0 part
├─pve-swap 253:0 0 24G 0 lvm [SWAP]
├─pve-root 253:1 0 46.3G 0 lvm /
├─pve-data_tmeta 253:2 0 1G 0 lvm
│ └─pve-data-tpool 253:4 0 97G 0 lvm
│ └─pve-data 253:5 0 97G 1 lvm
└─pve-data_tdata 253:3 0 97G 0 lvm
└─pve-data-tpool 253:4 0 97G 0 lvm
└─pve-data 253:5 0 97G 1 lvm
sdb 8:16 0 185.8G 0 disk
├─sdb1 8:17 0 1007K 0 part
├─sdb2 8:18 0 512M 0 part
└─sdb3 8:19 0 184.5G 0 part
sr0 11:0 1 1024M 0 rom
nvme1n1 259:0 0 2.9T 0 disk
└─nvme1n1p1 259:2 0 2.9T 0 part /mnt/pve/VM-Disk2
nvme0n1 259:1 0 2.9T 0 disk
└─nvme0n1p1 259:3 0 2.9T 0 part /mnt/pve/VM-Disk

And the fdisk output:
Disk /dev/nvme1n1: 2.91 TiB, 3200631791616 bytes, 6251233968 sectors
Disk model: 7335943:ICDPC5ED2ORA6.4T
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 48C896CD-6C5B-45BF-8334-03481C64856F

Device Start End Sectors Size Type
/dev/nvme1n1p1 2048 6251233934 6251231887 2.9T Linux filesystem


Disk /dev/nvme0n1: 2.91 TiB, 3200631791616 bytes, 6251233968 sectors
Disk model: 7335943:ICDPC5ED2ORA6.4T
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 7ACC2B1F-4FE6-479F-B119-1F3B7011EB58

Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 6251233934 6251231887 2.9T Linux filesystem


Disk /dev/sda: 185.75 GiB, 199447543808 bytes, 389545984 sectors
Disk model: PERC H710P
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 6B032A2B-4307-4B9C-B386-C03D60025859

Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 389545950 388495327 185.2G Linux LVM


Disk /dev/sdb: 185.75 GiB, 199447543808 bytes, 389545984 sectors
Disk model: PERC H710P
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CA124B59-7DDF-453B-96A4-29B27C5C110E

Device Start End Sectors Size Type
/dev/sdb1 34 2047 2014 1007K BIOS boot
/dev/sdb2 2048 1050623 1048576 512M EFI System
/dev/sdb3 1050624 387973120 386922497 184.5G Linux filesystem


Disk /dev/mapper/pve-swap: 24 GiB, 25769803776 bytes, 50331648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/pve-root: 46.25 GiB, 49660559360 bytes, 96993280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
 
Last edited:
