Need Help With Recovering Data

Abstract3000

Member
Feb 21, 2019
So I installed a new VM for Home Assistant but accidentally used VM ID 101, which already existed on the machine. It overwrote it; there were issues with the script install, so I deleted the VM and recreated it.

What I didn't notice was that VM 101 was already taken; it was a VM, not an LXC, so it was not at the top of the list. That particular VM was a Synology VM with two 6TB drives in a RAID 1 configuration. When I noticed it was gone, I looked at the storage pools for NAS1 and NAS2 (respectively); they are no longer full but show 0.57% used on both drives. I restored a backup of the Synology VM, added the detached device (from the Hardware tab), and booted it up, but it does not appear to see the drives.

Did the two drives get wiped? Is it possible the data is still on them? Is there any way to get the restored VM to recognize and reattach them?

So, a bit more info:
Code:
lsblk -fs
NAS--Storage-vm--101--disk--0                                                                                        
└─sdb1                         LVM2_member LVM2 001              xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx              
  └─sdb                                                                                                              
NAS--Storage2-vm--101--disk--0                                                                                        
└─sde1                         LVM2_member LVM2 001              xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx                
  └─sde

Listed in the 101.conf file:
sata2: nasDrive1:vm-101-disk-0,size=32G
sata3: nasDrive2:vm-101-disk-0,size=32G

That's what is throwing me: these disks are 6TB (5.5T) each.

Here it shows that info:
Code:
sdb                                  8:16   0   5.5T  0 disk
└─sdb1                               8:17   0   5.5T  0 part
  └─NAS--Storage-vm--101--disk--0  253:1    0    32G  0 lvm
sdc                                  8:32   0   1.8T  0 disk /mnt/data2
sdd                                  8:48   1  28.9G  0 disk /mnt/data
sde                                  8:64   0   5.5T  0 disk
└─sde1                               8:65   0   5.5T  0 part
  └─NAS--Storage2-vm--101--disk--0 253:0    0    32G  0 lvm
 
So I suppose this is a tough one. To start out small and understand the current state of the drives, here is what I see:
Code:
sda                                  8:0    0 931.5G  0 disk
└─sda1                               8:1    0 931.5G  0 part
  └─Data--Storage-vm--105--disk--0 253:2    0   931G  0 lvm
sdb                                  8:16   0   5.5T  0 disk
└─sdb1                               8:17   0   5.5T  0 part
  └─NAS--Storage-vm--101--disk--0  253:0    0    32G  0 lvm
sdc                                  8:32   0   1.8T  0 disk /mnt/data2
sdd                                  8:48   1  28.9G  0 disk /mnt/data
sde                                  8:64   0   5.5T  0 disk
└─sde1                               8:65   0   5.5T  0 part
  └─NAS--Storage2-vm--101--disk--0 253:1    0    32G  0 lvm
2 of the drives are attached to VM 101
1 drive is attached to VM 105
The disk and partition both = 5.5T, but the LVM volume is only 32G (VM 101)
The disk and partition = 931.5G and 931G for the LVM volume (VM 105)

Now if I go to Datacenter -> LVM, I notice the following:

[Screenshot: Datacenter -> LVM view]

100% is allocated to VM 105, yet only 1% is allocated to VM 101.

This is now flipped; the allocation used to be 100% for VM 101 as well. So what I am trying to figure out is: why is Proxmox only allocating 32G to the VM when attached? Does the other 99% of the drive still retain my data? Is there some command on the backend I need to run so that Proxmox looks at the other 99% rather than the 1% and attaches that?
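
In case it helps, the same allocation should be visible from the shell with LVM's reporting options; these are stock vgs/lvs flags, nothing specific to my setup:
Code:
# volume groups: total size vs. free space
vgs -o vg_name,vg_size,vg_free
# logical volumes: the size each VM disk was actually created with
lvs -o lv_name,vg_name,lv_size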
 
I'm sorry, I don't quite understand what you did. How did you overwrite your VM? That requires a special flag to be set, which you should only do if you have a very good reason.

If the backup only contains 32GB drives, you probably didn't back up the larger ones. Also, how did you get a detached disk? Did you just delete the conf file? If yes, reattaching the disk should work.

Edit: The output of qm config VMID could also be interesting.
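
As a sketch, assuming the storage and volume names from your 101.conf snippet, reattaching an unused disk would look something like:
Code:
# attach the existing volume back to the VM at the sata2 slot
qm set 101 -sata2 nasDrive1:vm-101-disk-0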
 
@Matthias.
Thank you for taking the time to reply greatly appreciated :)

So, to explain what I mean by "overwrite": I ran this particular script to install Home Assistant:
Code:
bash -c "$(wget -qLO - https://github.com/tteck/Proxmox/raw/main/vm/haos-vm-v4.sh)"

I made the stupid mistake of entering VM ID 101 instead of taking the suggested ID; I looked at the server and didn't see a current VM 101. I really need to look harder, and from now on I will always just take the suggested ID.

I have tried not to be too abrasive about this, given my limited understanding of why it is happening, but both disks are 5.5T drives, though when attached to the VM they now only seem to be recognized as 32G drives. Here is the output you asked for:
Code:
root@server:~# qm config 101
args: -device 'piix3-usb-uhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/101/synoboot.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot'
balloon: 0
bios: ovmf
boot: cdn
bootdisk: sata1
cores: 4
efidisk0: local:101/vm-101-disk-0.qcow2,size=128K
ide2: none,media=cdrom
memory: 2048
name: XpenoDsm61x
net0: e1000=76:F2:61:2E:9A:3A,bridge=vmbr0,link_down=1
net1: e1000=C6:34:21:B2:EA:FF,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
sata0: local:101/vm-101-disk-2.qcow2,size=26G
sata1: local:101/vm-101-disk-3.raw,size=50M
sata2: nasDrive2:vm-101-disk-0,size=32G
scsi2: /dev/disk/by-id/ata-WDC_WD60EFRX-68L0BN1_WD-WX41D985F598,size=5860522584K
smbios1: uuid=b4af0a37-fdcb-4697-a69d-83ec28b114f5
sockets: 1
unused0: local:101/vm-101-disk-1.raw
vmgenid: 1f4b7aa3-c317-4fc2-a7ba-edd62b09a2d5

Since I was having difficulty and it seemed like there was going to be little help, I kept reading and trying to find a solution (I left HDD2 alone). For HDD1, however, I looked at the LVM metadata snapshots, found the one taken right before I ran the script mentioned above, and attempted a vgcfgrestore (the drive was part of the VG NAS-Storage). Though the restore was successful (I ran --test first), the volume no longer shows under lvs:
Code:
root@server:/etc/lvm/backup# lvs
  LV            VG           Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  vm-105-disk-0 Data-Storage -wi-ao---- 931.00g
  vm-101-disk-0 NAS-Storage2 -wi-------  32.00g
  vm-110-disk-0 NAS-Storage2 -wi-a-----  32.00g
  vm-111-disk-0 NAS-Storage2 -wi-a-----  32.00g

Though it still appears under vgs:
Code:
root@server:/etc/lvm/backup# vgs
  VG           #PV #LV #SN Attr   VSize    VFree
  Data-Storage   1   1   0 wz--n- <931.51g 520.00m
  NAS-Storage    1   0   0 wz--n-   <5.46t  <5.46t
  NAS-Storage2   1   3   0 wz--n-   <5.46t   5.36t
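
For completeness, the restore sequence I ran was roughly the following (the archive filename is a placeholder here; the real one comes from the --list output):
Code:
# list the archived metadata snapshots for the VG
vgcfgrestore --list NAS-Storage
# dry run against the chosen archive first
vgcfgrestore --test -f /etc/lvm/archive/NAS-Storage_00001-0000000000.vg NAS-Storage
# the actual restore, then reactivate the logical volumes
vgcfgrestore -f /etc/lvm/archive/NAS-Storage_00001-0000000000.vg NAS-Storage
vgchange -ay NAS-Storage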

I tried mounting it to the VM after doing so, just to see if I could get the VM to at least recognize it as a larger disk, but now it doesn't even see HDD1 anymore, just HDD2, and at 32G.

I'm not sure how to even approach this at this point, and I'm thinking the drive will need to be pulled and recovery software used in hopes of success. But if anyone knows the adequate commands to glean additional info, I will gladly post all outputs; I was hoping it was a stupid-simple fix that could easily be addressed as an oversight due to my lack of knowledge.

Any assistance would be greatly appreciated :)
 
I made the stupid mistake of entering VM ID 101 instead of taking the suggested ID; I looked at the server and didn't see a current VM 101. I really need to look harder, and from now on I will always just take the suggested ID.
qm (our CLI tool for VM management) won't allow you to create a VM with an ID that is already used by a different VM. So it seems the script failed as expected and you deleted the existing VM afterwards. Either way, for issues with that script, please contact the creator of the script (I believe it's @tteckster).

Proxmox creates virtual disks on your physical disks. What the VM sees is entirely dependent on your configuration of these virtual disks.
sata2: nasDrive2:vm-101-disk-0,size=32G means you've created a 32G virtual disk on the storage nasDrive2 and attached it at sata2. If you want to use the whole physical disk exclusively for that VM, you could also use passthrough. FYI, the size parameter in the config doesn't have an effect on the actual size of the disk.
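
As a sketch, passing the whole physical disk through by its stable by-id path looks something like this; it is effectively what the scsi2 line in your config already does:
Code:
# give the VM the entire physical disk instead of a virtual disk on top of it
qm set 101 -scsi2 /dev/disk/by-id/ata-WDC_WD60EFRX-68L0BN1_WD-WX41D985F598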

Though it still appears under vgs:
I'm not that well versed with LVM-thin, but shouldn't you run lvscan rather than lvs?
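
Something like this; both are stock LVM commands:
Code:
# lvscan lists all LVs in all VGs and marks each one ACTIVE or inactive
lvscan
# lvs -a additionally includes hidden/internal volumes
lvs -a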

As I probably mentioned before, the easiest way would be to restore the VM from a backup.

I hope that helps you.
 
When the script hits an error, it runs a cleanup:

Bash:
function cleanup_vmid() {
  # if the VM exists, stop it when running, then destroy it
  if qm status $VMID &>/dev/null; then
    if [ "$(qm status $VMID | awk '{print $2}')" == "running" ]; then
      qm stop $VMID
    fi
    qm destroy $VMID
  fi
}

So, you received an error

Code:
 - Creating HAOS VM...unable to create VM 101 - VM 101 already exists on node 'server'
‼ ERROR 255@281 Unknown failure occurred.
  Logical volume "vm-101-disk-1" successfully removed
  Logical volume "vm-101-disk-0" successfully removed

Nothing left behind

What I didn't expect was for someone to try to use an existing VMID (the prompt does state "Advanced"). I guess I can add a check for that, something like:

Bash:
# collect every VMID currently known to the cluster
USEDID=$(pvesh get /cluster/resources --type vm --output-format yaml | egrep -i 'vmid' | awk '{print $2}')

# -w matches whole words only, so e.g. VMID 10 won't match 101
if echo "$USEDID" | egrep -qw "$VMID"
then
  echo -e "ID $VMID is already in use"
  echo -e "Exiting Script"
  sleep 2;
  exit
fi

This would run before trying to create the VM.
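
On a single node, a simpler, roughly equivalent check could lean on qm directly; the pvesh version above is what covers a whole cluster:

Bash:
# qm status exits non-zero when the VMID doesn't exist on this node
if qm status "$VMID" &>/dev/null; then
  echo -e "ID $VMID is already in use"
  echo -e "Exiting Script"
  exit
fi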

Simple case of PEBKAC
 
I get it; it was completely my fault. I was just hoping my screw-up wasn't as monumental as it was. Thanks for clarifying that the data was deleted, though I had somewhat already come to that conclusion after almost a week.

I ended up extracting one of the drives (they were originally mirrored) and using professional recovery software to extract the lost data. Between the recovered files and the data I had backed up, I have most of everything recovered; it has just been an extremely long process of setting everything back up and transferring/organizing.

In the future I will take the Proxmox default for assigned VM numbers and ensure everything on those drives is backed up properly.

Though I do have a question. I now have two VMs, VM 101 and VM 111; I ended up building a new VM with more up-to-date software and passed the drives through to it. Everything is working in that regard. The issue is that one of the drives (still only 32G, mind you) is still attached to VM 101; it has been "Detached" and the VM is just stopped. I want to delete that VM, but it brings up a warning that it will try to delete the HDD again, so I have left it alone. Is there any way of detaching that drive from the original, no-longer-used VM without deleting it? Do I just remove the line from the configuration file?
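
To be concrete, this is what I was considering, assuming the detached disk shows up as an unusedN entry; please correct me if hand-editing is the wrong approach:
Code:
# check which unusedN entry the detached disk landed on
qm config 101
# then delete that "unusedN: ..." line from /etc/pve/qemu-server/101.conf by hand,
# which should drop the reference without touching the volume itself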
 
