Disk lost

orosmannaro

Well-Known Member
For some reason, my LVM storage was no longer linked to the iSCSI connection on my QNAP server.

So, from the web interface I deleted the LVM and iSCSI storage.

Then I recreated the iSCSI connection, but when I tried to recreate the LVM storage I got this error:

Code:
create storage failed: device '/dev/disk/by-id/scsi-36e843b689869ccbd2947d4833db6fcdc' is already used by volume group 'mydatavm' (500)

So I deleted volume group mydatavm:

Code:
# vgremove mydatavm
Do you really want to remove volume group "mydatavm" containing 6 logical volumes? [y/n]: y
Do you really want to remove and DISCARD logical volume vm-106-disk-1? [y/n]: y
  Logical volume "vm-106-disk-1" successfully removed
Do you really want to remove and DISCARD logical volume vm-107-disk-1? [y/n]: y
  Logical volume "vm-107-disk-1" successfully removed
Do you really want to remove and DISCARD logical volume vm-100-disk-1? [y/n]: y
  Logical volume "vm-100-disk-1" successfully removed
Do you really want to remove and DISCARD logical volume vm-101-disk-1? [y/n]: y
  Logical volume "vm-101-disk-1" successfully removed
Do you really want to remove and DISCARD logical volume vm-105-disk-1? [y/n]: y
  Logical volume "vm-105-disk-1" successfully removed
Do you really want to remove and DISCARD logical volume vm-102-disk-1? [y/n]: y
  Logical volume "vm-102-disk-1" successfully removed
  Volume group "mydatavm" successfully removed

and

Code:
pvremove /dev/sdb

Then I recreated the iSCSI and LVM storage, and everything worked fine.

But now I don't see my old VM disks in the LVM. How can I restore them?
 
You removed all VM disks and wonder why they disappeared??

Because on serverfault.com I read this:

Does 'lvremove' destroy data?
...
No, lvremove only destroys the metadata identifying the logical volume and the specific extents which it used. It is possible to recover the data that used to be in the volume, if specific steps were not taken to destroy it.


My fault...
 
I'm trying to restore with vgcfgrestore.

I'm looking for the UUID of the new LVM I created, but lvdisplay doesn't show it.

Is it possible that when I created the LVM storage, Proxmox defined it without "initializing" it?
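For anyone following along: LVM keeps automatic metadata backups under /etc/lvm/archive, which is what makes vgcfgrestore possible at all. A sketch of how to find the right archive (the VG name is from this thread; the archive file here is a fake example, real ones are generated by LVM):

```shell
# LVM archives the previous metadata before every vgremove/lvremove.
# On the real system you would list the snapshots with:
#   vgcfgrestore --list mydatavm
# Each archive is a plain-text file like this (fake example for illustration):
mkdir -p /tmp/lvm-archive
cat > /tmp/lvm-archive/mydatavm_00042-1234567890.vg <<'EOF'
contents = "Text Format Volume Group"
description = "Created *before* executing 'vgremove mydatavm'"
EOF

# The description line tells you which command each snapshot predates,
# i.e. which file still holds the layout of the deleted LVs:
grep -h description /tmp/lvm-archive/mydatavm_*.vg
```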
 
Last edited:
With vgcfgrestore and the data in /etc/lvm/archive I was able to recreate the logical volumes, using the backup file with some edits (the name and UUID of the volume group, and the UUID of the physical volume):

Code:
vgcfgrestore mynewvg -f /etc/lvm/archive/editedbackup_recovery.vg

Then I activated the logical volumes:

Code:
vgchange -a y mynewvg

But when I start a VM, the boot fails.

I tried to check the VM's partition table, but the disk doesn't contain a valid one:

Code:
fdisk -l /dev/mynewvg/vm-100-disk-1

Disk /dev/mynewvg/vm-100-disk-1: 53.7 GB, 53687091200 bytes
255 heads, 63 sectors/track, 6527 cylinders, total 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 1048576 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/mynewvg/vm-100-disk-1 doesn't contain a valid partition table

Maybe now you can help me?
I hope so very much!

Thank you.

NOTE

hexedit /dev/sdb shows a lot of data: maybe my VMs are still there (?)
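A quick, read-only way to see whether a restored LV really starts with a boot sector: an MBR disk ends its first 512-byte sector with the signature bytes 55 AA. A sketch on a scratch file (on the real system, point `od` at /dev/mynewvg/vm-100-disk-1 instead):

```shell
# Build a scratch "disk" and write the MBR signature at offset 510
# (0x55 0xAA, given as octal escapes so plain POSIX printf accepts them).
truncate -s 1M /tmp/fake-disk.img
printf '\125\252' | dd of=/tmp/fake-disk.img bs=1 seek=510 conv=notrunc 2>/dev/null

# Read the two signature bytes back: "55aa" means a partition table is
# present; "0000" (matching fdisk's zero disk identifier above) would mean
# the table was wiped or the LV now starts at the wrong offset.
sig=$(od -An -tx1 -j 510 -N 2 /tmp/fake-disk.img | tr -d ' \n')
echo "$sig"   # 55aa on this scratch file
```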
 
Last edited:
Hi,
what is the output of lvs?

Perhaps you can try to recover a partition with testdisk, but it's strange that there is no partition table on the LVs...
Is your PV the same - or, better asked - the same size (a partition, or the whole disk)?
E.g. if the old partition starts at sector 2048, the new one should start there too.

If you want to be on the safe side, you should copy the raw VM disk to a safe place (as a backup, or to try the repair there),
like "dd if=/dev/mynewvg/vm-100-disk-1 of=/mnt/backupdisk/vm-100-disk-1.raw bs=1M".

Depending on your VM, you can check whether the right data is inside by trying to recover images from the LV with "photorec".
Perhaps you'll find JPG files that you know are on this VM; then you know you are on the right path.
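Udo's copy-first advice as a sketch, with scratch files standing in for the real devices (the /dev and /mnt paths in the comments are the ones from his post):

```shell
# Stand-ins for the LV and the backup target; on the real system these are
# /dev/mynewvg/vm-100-disk-1 and /mnt/backupdisk/vm-100-disk-1.raw.
src=/tmp/lv-stand-in.raw
dst=/tmp/lv-backup.raw
dd if=/dev/urandom of="$src" bs=1M count=4 2>/dev/null   # pretend LV

# 1) Raw copy of the whole LV to a safe place.
dd if="$src" of="$dst" bs=1M 2>/dev/null

# 2) Verify the copy is byte-identical before experimenting on it.
cmp "$src" "$dst" && echo "copy OK"

# 3) Then run photorec (and any repair attempts) on the copy, never on
#    the original:  photorec /log "$dst"
```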

Udo
 
Hi Udo.

First of all: thanks a lot for your help.

I spent the weekend restoring backup data.

Now everything is working fine, but there is one VM that I would like to repair, if possible.

This is the situation.

Proxmox uses a QNAP server as a storage device through an iSCSI connection.

The QNAP server has 2 LUNs (the OLD one and a NEW one). Proxmox uses these LUNs as LVM storage.

The NEW LVM now stores the VMs rebuilt from the restored backup data.

Now I want to try to recover one VM from the OLD LVM.

The QNAP server tells me that the OLD LUN is empty, but hexedit shows that there is still data inside this Proxmox LVM storage.

These are the metadata of the deleted VM stored on the OLD LVM:

Code:
   logical_volumes {

     vm-106-disk-1 {
       id = "pLd4OG-mxI4-6WM9-mzcA-QBN1-RdMf-RrqLqt"
       status = ["READ", "WRITE", "VISIBLE"]
       flags = []
       tags = ["pve-vm-106"]
       creation_host = "cloud"
       creation_time = 1469802170   # 2016-07-29 16:22:50 +0200
       segment_count = 1

       segment1 {
         start_extent = 0
         extent_count = 25600   # 100 Gigabytes

         type = "striped"
         stripe_count = 1   # linear

         stripes = [
           "pv0", 0
         ]
       }
     }


What if I create a VM_NEW on the NEW LVM with the same characteristics as the deleted one, and then use dd to copy data from the OLD LVM over the data of this VM_NEW?
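Before overwriting anything, it's worth computing where vm-106-disk-1's bytes should sit on the raw OLD PV. The archived metadata says segment1 is linear and starts at extent 0 of pv0; with LVM2's default 4 MiB extents and a default 1 MiB metadata area (pe_start) - both assumptions, to be verified with `vgdisplay` and `pvs -o +pe_start` - the arithmetic is:

```shell
# Values from the archived metadata plus assumed LVM2 defaults.
extent_size_kib=4096      # 4 MiB extents (default; check with vgdisplay)
extent_count=25600        # from the metadata: 25600 extents = 100 GiB
start_extent=0            # segment1 starts at extent 0 of pv0
pe_start_kib=1024         # assumed 1 MiB metadata area; check pvs -o +pe_start

offset_kib=$(( pe_start_kib + start_extent * extent_size_kib ))
size_kib=$(( extent_count * extent_size_kib ))
echo "data starts at ${offset_kib} KiB, length ${size_kib} KiB"

# With those numbers confirmed on the system, the copy would look like this
# (DANGEROUS - target names are placeholders, double-check every device,
# and only write onto a freshly created, same-sized LV):
#   dd if=/dev/sdb of=/dev/NEWdata-pxmx0/vm-NEW-disk bs=1024 \
#      skip=${offset_kib} count=${size_kib}
```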
 
Hi,
unfortunately the metadata don't show where the LVs are on the volume...
One idea: are you sure that the filter in /etc/lvm/lvm.conf doesn't block the PV?

What is the output of
Code:
blkid
Does testdisk find anything on your "empty" LVM partition?

Udo
 
Here I am...

So, this is blkid output:

Code:
# blkid
/dev/sda1: UUID="b5cdde36-f525-4a85-aeea-e573ff50aa2f" TYPE="ext3"
/dev/sda2: UUID="IZM7cK-170y-2K9F-KCe8-H0Ct-PkUc-N1FlfM" TYPE="LVM2_member"
/dev/mapper/pve-root: UUID="c90b8b31-a8c1-427d-a136-dac09f25112f" TYPE="ext3"
/dev/mapper/pve-swap: UUID="5486f562-58ba-488e-9f47-2de0e059e652" TYPE="swap"
/dev/mapper/pve-data: UUID="1bf77a04-90ec-47bd-aa88-d000f21a72ca" TYPE="ext3"
/dev/sdb: UUID="UeDwrj-uef6-tEGu-oRY5-5noZ-0yAO-5MT5oT" TYPE="LVM2_member"
/dev/sdc: UUID="gv7nLt-BcOx-wG1v-KuPM-Mbwl-pE90-wkCPde" TYPE="LVM2_member"

I'm new to testdisk, so I have to read some documentation...

Meanwhile, this is the testdisk /list output:

Code:
# testdisk /list
TestDisk 6.13, Data Recovery Utility, November 2011
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org
Please wait...

Disk /dev/sda - 998 GB / 930 GiB - CHS 121454 255 63, sector size=512

Disk /dev/sdb - 2199 GB / 2048 GiB - CHS 267349 255 63, sector size=512

Disk /dev/sdc - 2199 GB / 2048 GiB - CHS 267349 255 63, sector size=512

Disk /dev/mapper/NEWdata--pxmx0-vm--200--disk--1 - 107 GB / 100 GiB - CHS 209715200 1 1, sector size=512
Disk /dev/mapper/NEWdata--pxmx0-vm--201--disk--1 - 107 GB / 100 GiB - CHS 209715200 1 1, sector size=512

Disk /dev/mapper/pve-data - 844 GB / 786 GiB - CHS 1650229248 1 1, sector size=512
Disk /dev/mapper/pve-root - 103 GB / 96 GiB - CHS 201326592 1 1, sector size=512
Disk /dev/mapper/pve-swap - 33 GB / 31 GiB - CHS 65011712 1 1, sector size=512

Disk /dev/dm-0 - 103 GB / 96 GiB - CHS 201326592 1 1, sector size=512
Disk /dev/dm-1 - 33 GB / 31 GiB - CHS 65011712 1 1, sector size=512
Disk /dev/dm-2 - 844 GB / 786 GiB - CHS 1650229248 1 1, sector size=512
Disk /dev/dm-3 - 107 GB / 100 GiB - CHS 209715200 1 1, sector size=512
Disk /dev/dm-4 - 107 GB / 100 GiB - CHS 209715200 1 1, sector size=512


--------------------------------------------------------------------------------

Disk /dev/sda - 998 GB / 930 GiB - CHS 121454 255 63
  Partition       Start  End  Size in sectors
1 * Linux  0  32 33  65  69  4  1046528
2 P Linux LVM  65  69  5 121454 191 17 1950121984

--------------------------------------------------------------------------------

Disk /dev/sdb - 2199 GB / 2048 GiB - CHS 267349 255 63
  Partition       Start  End  Size in sectors
  P Linux LVM2  0  0  1 267349  89  4 4294967296

--------------------------------------------------------------------------------

Disk /dev/sdc - 2199 GB / 2048 GiB - CHS 267349 255 63
  Partition       Start  End  Size in sectors
  P Linux LVM2  0  0  1 267349  89  4 4294967296

--------------------------------------------------------------------------------

Disk /dev/mapper/NEWdata--pxmx0-vm--200--disk--1 - 107 GB / 100 GiB - CHS 209715200 1 1
  Partition       Start  End  Size in sectors
1 * Linux  2048  499711  497664

Warning: Bad starting sector (CHS and LBA don't match)
2 E extended  501758  209713151  209211394

Warning: Bad starting sector (CHS and LBA don't match)
5 L Linux LVM  501760  209713151  209211392

Warning: Bad starting sector (CHS and LBA don't match)

--------------------------------------------------------------------------------

Disk /dev/mapper/NEWdata--pxmx0-vm--201--disk--1 - 107 GB / 100 GiB - CHS 209715200 1 1
  Partition       Start  End  Size in sectors
1 * Linux  2048  999423  997376

Warning: Bad starting sector (CHS and LBA don't match)
2 E extended  1001470  209713151  208711682

Warning: Bad starting sector (CHS and LBA don't match)
5 L Linux LVM  1001472  209713151  208711680

Warning: Bad starting sector (CHS and LBA don't match)

--------------------------------------------------------------------------------

Disk /dev/mapper/pve-data - 844 GB / 786 GiB - CHS 1650229248 1 1
  Partition       Start  End  Size in sectors
  P ext3  0 1650229247 1650229248

--------------------------------------------------------------------------------

Disk /dev/mapper/pve-root - 103 GB / 96 GiB - CHS 201326592 1 1
  Partition       Start  End  Size in sectors
  P ext3  0  201326591  201326592

--------------------------------------------------------------------------------

Disk /dev/mapper/pve-swap - 33 GB / 31 GiB - CHS 65011712 1 1
  Partition       Start  End  Size in sectors
  P Linux SWAP 2  0  65011711  65011712

--------------------------------------------------------------------------------

Disk /dev/dm-0 - 103 GB / 96 GiB - CHS 201326592 1 1
  Partition       Start  End  Size in sectors
  P ext3  0  201326591  201326592

--------------------------------------------------------------------------------

Disk /dev/dm-1 - 33 GB / 31 GiB - CHS 65011712 1 1
  Partition       Start  End  Size in sectors
  P Linux SWAP 2  0  65011711  65011712

--------------------------------------------------------------------------------

Disk /dev/dm-2 - 844 GB / 786 GiB - CHS 1650229248 1 1
  Partition       Start  End  Size in sectors
  P ext3  0 1650229247 1650229248

--------------------------------------------------------------------------------

Disk /dev/dm-3 - 107 GB / 100 GiB - CHS 209715200 1 1
  Partition       Start  End  Size in sectors
1 * Linux  2048  499711  497664

Warning: Bad starting sector (CHS and LBA don't match)
2 E extended  501758  209713151  209211394

Warning: Bad starting sector (CHS and LBA don't match)
5 L Linux LVM  501760  209713151  209211392

Warning: Bad starting sector (CHS and LBA don't match)
Disk /dev/dm-4 - 107 GB / 100 GiB - CHS 209715200 1 1
  Partition       Start  End  Size in sectors
1 * Linux  2048  999423  997376

Warning: Bad starting sector (CHS and LBA don't match)
2 E extended  1001470  209713151  208711682

Warning: Bad starting sector (CHS and LBA don't match)
5 L Linux LVM  1001472  209713151  208711680

Warning: Bad starting sector (CHS and LBA don't match)

I also need some time to understand how to check whether the filter in /etc/lvm/lvm.conf blocks the PV :)
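For reference, that filter check is a one-liner. A sketch on a scratch copy of the config (on the real system, grep /etc/lvm/lvm.conf itself; the rejected device shown is just an illustration):

```shell
# A config with an active filter that would hide /dev/sdb looks like this:
cat > /tmp/lvm-test.conf <<'EOF'
devices {
    filter = [ "r|/dev/sdb|", "a|.*|" ]
}
EOF

# Show any uncommented filter lines; a leading "r|/dev/sdb|" rule rejects
# the PV. In a default lvm.conf the filter is commented out, so the same
# grep against the real file would print nothing.
grep -E '^[[:space:]]*(global_)?filter' /tmp/lvm-test.conf
```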


Thanks!
 
Looking with pvs:

Code:
# pvs
PV  VG  Fmt  Attr PSize  PFree
/dev/sda2  pve  lvm2 a--  929,89g 16,00g
/dev/sdb  OLDdata_lvm  lvm2 a--  2,00t  2,00t
/dev/sdc  NEWdata-pxmx0 lvm2 a--  2,00t  1,61t



And I ran testdisk on the other one...

Code:
# cat testdisk.log


Code:
Fri Sep 30 13:54:04 2016
Command line: TestDisk

TestDisk 6.13, Data Recovery Utility, November 2011
Christophe GRENIER <grenier@cgsecurity.org>
http://www.cgsecurity.org
OS: Linux, kernel 2.6.32-46-pve (#1 SMP Tue Jun 28 20:04:58 CEST 2016) x86_64
Compiler: GCC 4.6
Compilation date: 2012-01-17T14:04:23
ext2fs lib: 1.42.5, ntfs lib: 10:0:0, reiserfs lib: none, ewf lib: none
Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
Hard disk list
Disk /dev/sda - 998 GB / 930 GiB - CHS 121454 255 63, sector size=512 - INTEL RS2WC040, FW:2.13
Disk /dev/sdb - 2199 GB / 2048 GiB - CHS 267349 255 63, sector size=512 - QNAP iSCSI Storage, FW:4.0
Disk /dev/sdc - 2199 GB / 2048 GiB - CHS 267349 255 63, sector size=512 - QNAP iSCSI Storage, FW:4.0
Disk /dev/mapper/NEWdata--pxmx0-vm--200--disk--1 - 107 GB / 100 GiB - CHS 209715200 1 1, sector size=512
Disk /dev/mapper/NEWdata--pxmx0-vm--201--disk--1 - 107 GB / 100 GiB - CHS 209715200 1 1, sector size=512
Disk /dev/mapper/pve-data - 844 GB / 786 GiB - CHS 1650229248 1 1, sector size=512
Disk /dev/mapper/pve-root - 103 GB / 96 GiB - CHS 201326592 1 1, sector size=512
Disk /dev/mapper/pve-swap - 33 GB / 31 GiB - CHS 65011712 1 1, sector size=512
Disk /dev/dm-0 - 103 GB / 96 GiB - CHS 201326592 1 1, sector size=512
Disk /dev/dm-1 - 33 GB / 31 GiB - CHS 65011712 1 1, sector size=512
Disk /dev/dm-2 - 844 GB / 786 GiB - CHS 1650229248 1 1, sector size=512
Disk /dev/dm-3 - 107 GB / 100 GiB - CHS 209715200 1 1, sector size=512
Disk /dev/dm-4 - 107 GB / 100 GiB - CHS 209715200 1 1, sector size=512

Partition table type (auto): None
Disk /dev/sdb - 2199 GB / 2048 GiB - QNAP iSCSI Storage
Partition table type: None

Interface Advanced

LVM2 magic value at 0/0/1
part_size 4294967296

LVM2 magic value at 0/0/1
  P Linux LVM2  0  0  1 267349  89  4 4294967296
  LVM2, 2199 GB / 2048 GiB

Analyse Disk /dev/sdb - 2199 GB / 2048 GiB - CHS 267349 255 63

LVM2 magic value at 0/0/1
part_size 4294967296

LVM2 magic value at 0/0/1
Current partition structure:
  P Linux LVM2  0  0  1 267349  89  4 4294967296

search_part()
Disk /dev/sdb - 2199 GB / 2048 GiB - CHS 267349 255 63

LVM2 magic value at 0/0/1
part_size 4294967296
  Linux LVM2  0  0  1 267349  89  4 4294967296
  LVM2, 2199 GB / 2048 GiB

Results
  P Linux LVM2  0  0  1 267349  89  4 4294967296
  LVM2, 2199 GB / 2048 GiB
Change partition type:
  P Linux LVM2  0  0  1 267349  89  4 4294967296
  LVM2, 2199 GB / 2048 GiB

interface_write()
  P Linux LVM2  0  0  1 267349  89  4 4294967296
Write isn't available because the partition table type "None" has been selected.

search_part()
Disk /dev/sdb - 2199 GB / 2048 GiB - CHS 267349 255 63

LVM2 magic value at 0/0/1
part_size 4294967296
  Linux LVM2  0  0  1 267349  89  4 4294967296
  LVM2, 2199 GB / 2048 GiB

Results
  P Linux LVM2  0  0  1 267349  89  4 4294967296
  LVM2, 2199 GB / 2048 GiB

interface_write()
  P Linux LVM2  0  0  1 267349  89  4 4294967296
Write isn't available because the partition table type "None" has been selected.

TestDisk exited normally.




I'm going to read...


Thanks again!
 
