pve-data missing after reboot

After an accidental reboot, pve-data is missing.

Here is the lsblk output:

Code:
root@pve:~# lsblk
NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda            8:0    0  1.8T  0 disk
├─sda1         8:1    0 1007K  0 part
├─sda2         8:2    0    1G  0 part /boot/efi
└─sda3         8:3    0  1.8T  0 part
  ├─pve-swap 253:0    0    8G  0 lvm  [SWAP]
  └─pve-root 253:1    0   96G  0 lvm  /
sr0

Code:
root@pve:~# vgchange -ay
  Check of pool pve/data failed (status:1). Manual repair required!
  2 logical volume(s) in volume group "pve" now active
root@pve:~#
root@pve:~#
root@pve:~# lvscan
  ACTIVE            '/dev/pve/swap' [8.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [96.00 GiB] inherit
  inactive          '/dev/pve/data' [<1.67 TiB] inherit
  inactive          '/dev/pve/vm-108-disk-0' [100.00 GiB] inherit
  inactive          '/dev/pve/vm-109-disk-0' [300.00 GiB] inherit
  inactive          '/dev/pve/vm-118-disk-0' [40.00 GiB] inherit
  inactive          '/dev/pve/vm-119-disk-0' [50.00 GiB] inherit
  inactive          '/dev/pve/vm-120-disk-0' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-120-disk-1' [250.00 GiB] inherit
  inactive          '/dev/pve/vm-120-disk-2' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-102-disk-0' [100.00 GiB] inherit
  inactive          '/dev/pve/vm-105-disk-0' [100.00 GiB] inherit
  inactive          '/dev/pve/vm-100-disk-0' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-100-disk-1' [250.00 GiB] inherit
  inactive          '/dev/pve/vm-100-disk-2' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-103-disk-0' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-103-disk-1' [250.00 GiB] inherit
  inactive          '/dev/pve/vm-103-disk-2' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-103-disk-3' [50.00 GiB] inherit
  inactive          '/dev/pve/vm-106-disk-0' [100.00 GiB] inherit
root@pve:~#


Code:
root@pve:~# lvconvert --repair pve/data
Child 30616 exited abnormally
  Repair of thin metadata volume of thin pool pve/data failed (status:-1). Manual repair required!

Any idea how to fix this? Thanks
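
For reference, when lvconvert --repair fails like this, lvmthin(7) describes a manual route: swap the damaged metadata out into an ordinary LV, then run thin_check/thin_repair from thin-provisioning-tools on it by hand. "Child ... exited abnormally" can also just mean the repair tool itself crashed, so checking dmesg/journalctl first is worthwhile. A minimal sketch, assuming the tools are installed, roughly 16 GiB free in the VG for scratch LVs, and a backup or image of the disk taken first (the names meta_tmp and meta_new are placeholders):

Code:
# the pool must stay inactive throughout
lvchange -an pve/data

# scratch LV that will receive the damaged metadata in the swap
lvcreate -an -L16G -n meta_tmp pve

# swap: meta_tmp becomes the pool's metadata LV, and the damaged
# tmeta ends up as the ordinary LV pve/meta_tmp
lvconvert --thinpool pve/data --poolmetadata pve/meta_tmp

# activate the damaged copy and inspect/repair it by hand
lvchange -ay pve/meta_tmp
thin_check /dev/pve/meta_tmp

lvcreate -L16G -n meta_new pve
thin_repair -i /dev/pve/meta_tmp -o /dev/pve/meta_new

# swap the repaired copy back in and retry activation
lvchange -an pve/meta_new
lvconvert --thinpool pve/data --poolmetadata pve/meta_new
lvchange -ay pve/data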
 
# pvscan -vvv

Code:
root@pve:~# pvscan -vvv
  Parsing: pvscan -vvv
  Recognised command pvscan_display (id 123 / enum 105).
  Sysfs filter initialised.
  Internal filter initialised.
  Regex filter initialised.
  LVM type filter initialised.
  Usable device filter initialised (scan_lvs 0).
  mpath filter initialised.
  Partitioned filter initialised.
  signature filter initialised.
  MD filter initialised.
  Composite filter initialised.
  Persistent filter initialised.
  devices/hints not found in config: defaulting to all
  metadata/record_lvs_history not found in config: defaulting to 0
  DEGRADED MODE. Incomplete RAID LVs will be processed.
  Processing command: pvscan -vvv
  Command pid: 2253
  System ID:
  O_DIRECT will be used
  global/locking_type not found in config: defaulting to 1
  File locking settings: readonly:0 sysinit:0 ignorelockingfailure:0 global/metadata_read_only:0 global/wait_for_locks:1.
  devices/md_component_checks not found in config: defaulting to auto
  Using md_component_checks auto use_full_md_check 0
  report/output_format not found in config: defaulting to basic
  log/report_command_log not found in config: defaulting to 0
  Processing each PV
  Locking /run/lock/lvm/P_global RB
  _do_flock /run/lock/lvm/P_global:aux WB
  _undo_flock /run/lock/lvm/P_global:aux
  _do_flock /run/lock/lvm/P_global RB
  Finding VG info
  Finding devices to scan
  Creating list of system devices.
  Found dev 8:0 /dev/sda - new.
  Found dev 8:0 /dev/disk/by-id/scsi-36c81f660dcee52002bc376f2bab60d05 - new alias.
  Found dev 8:0 /dev/disk/by-id/wwn-0x6c81f660dcee52002bc376f2bab60d05 - new alias.
  Found dev 8:0 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0 - new alias.
  Found dev 8:1 /dev/sda1 - new.
  Found dev 8:1 /dev/disk/by-id/scsi-36c81f660dcee52002bc376f2bab60d05-part1 - new alias.
  Found dev 8:1 /dev/disk/by-id/wwn-0x6c81f660dcee52002bc376f2bab60d05-part1 - new alias.
  Found dev 8:1 /dev/disk/by-partuuid/b693bb94-88a7-4a97-90d7-54a0732d7b1a - new alias.
  Found dev 8:1 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0-part1 - new alias.
  Found dev 8:2 /dev/sda2 - new.
  Found dev 8:2 /dev/disk/by-id/scsi-36c81f660dcee52002bc376f2bab60d05-part2 - new alias.
  Found dev 8:2 /dev/disk/by-id/wwn-0x6c81f660dcee52002bc376f2bab60d05-part2 - new alias.
  Found dev 8:2 /dev/disk/by-partuuid/fd4174d7-2ec0-4a9f-9f5a-674674c7292c - new alias.
  Found dev 8:2 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0-part2 - new alias.
  Found dev 8:2 /dev/disk/by-uuid/6C54-69A2 - new alias.
  Found dev 8:3 /dev/sda3 - new.
  Found dev 8:3 /dev/disk/by-id/lvm-pv-uuid-nXqDi4-IaN9-7KzG-ShKd-Br8b-9Ajx-26boB3 - new alias.
  Found dev 8:3 /dev/disk/by-id/scsi-36c81f660dcee52002bc376f2bab60d05-part3 - new alias.
  Found dev 8:3 /dev/disk/by-id/wwn-0x6c81f660dcee52002bc376f2bab60d05-part3 - new alias.
  Found dev 8:3 /dev/disk/by-partuuid/e72b9b81-bf69-4ae4-b6b9-d45005e1391d - new alias.
  Found dev 8:3 /dev/disk/by-path/pci-0000:03:00.0-scsi-0:2:0:0-part3 - new alias.
  Found dev 11:0 /dev/sr0 - new.
  Found dev 11:0 /dev/cdrom - new alias.
  Found dev 11:0 /dev/disk/by-id/ata-TSSTcorp_DVD-ROM_SU-108BB_R9296GVD30037Z - new alias.
  Found dev 11:0 /dev/disk/by-path/pci-0000:00:1f.2-ata-5 - new alias.
  Found dev 11:0 /dev/disk/by-path/pci-0000:00:1f.2-ata-5.0 - new alias.
  Found dev 11:0 /dev/dvd - new alias.
  Found dev 7:0 /dev/loop0 - new.
  Found dev 7:1 /dev/loop1 - new.
  Found dev 7:2 /dev/loop2 - new.
  Found dev 7:3 /dev/loop3 - new.
  Found dev 7:4 /dev/loop4 - new.
  Found dev 7:5 /dev/loop5 - new.
  Found dev 7:6 /dev/loop6 - new.
  Found dev 7:7 /dev/loop7 - new.
  Found dev 253:0 /dev/dm-0 - new.
  Found dev 253:0 /dev/disk/by-id/dm-name-pve-swap - new alias.
  Found dev 253:0 /dev/disk/by-id/dm-uuid-LVM-J7A0JiQ0nLbPtN8LqqPmdAjTClEoZaIfoUp2ll80XMcE9dTzLhCQEjfA8BGFbSDK - new alias.
  Found dev 253:0 /dev/disk/by-uuid/845f537d-07f7-44b2-8c65-f49505de26a9 - new alias.
  Found dev 253:0 /dev/mapper/pve-swap - new alias.
  Found dev 253:0 /dev/pve/swap - new alias.
  Found dev 253:1 /dev/dm-1 - new.
  Found dev 253:1 /dev/disk/by-id/dm-name-pve-root - new alias.
  Found dev 253:1 /dev/disk/by-id/dm-uuid-LVM-J7A0JiQ0nLbPtN8LqqPmdAjTClEoZaIfuRN7jpMKT0hz4GrB71YVcJS0KgVPp5ib - new alias.
  Found dev 253:1 /dev/disk/by-uuid/9572816c-42e0-4696-8935-3001e077a201 - new alias.
  Found dev 253:1 /dev/mapper/pve-root - new alias.
  Found dev 253:1 /dev/pve/root - new alias.
  Filtering devices to scan (nodata)
  Opened /dev/loop0 RO O_DIRECT
  /dev/loop0: size is 0 sectors
  Closed /dev/loop0
  /dev/loop0: Skipping: Too small to hold a PV
  filter caching bad /dev/loop0
  Opened /dev/sda RO O_DIRECT
  /dev/sda: size is 3904897024 sectors
  Closed /dev/sda
  filter caching good /dev/sda
  /dev/cdrom: Skipping: Unrecognised LVM device type 11
  filter caching bad /dev/cdrom
  dm version   [ opencount flush ]   [16384] (*1)
  dm status   (253:0) [ noopencount noflush ]   [16384] (*1)
  /dev/pve/swap: Skipping unusable device.
  filter caching bad /dev/pve/swap
  Opened /dev/loop1 RO O_DIRECT
  /dev/loop1: size is 0 sectors
  Closed /dev/loop1
  /dev/loop1: Skipping: Too small to hold a PV
  filter caching bad /dev/loop1
  Opened /dev/sda1 RO O_DIRECT
  /dev/sda1: size is 2014 sectors
  Closed /dev/sda1
  /dev/sda1: Skipping: Too small to hold a PV
  filter caching bad /dev/sda1
  dm status   (253:1) [ noopencount noflush ]   [16384] (*1)
  /dev/pve/root: Skipping unusable device.
  filter caching bad /dev/pve/root
  Opened /dev/loop2 RO O_DIRECT
  /dev/loop2: size is 0 sectors
  Closed /dev/loop2
  /dev/loop2: Skipping: Too small to hold a PV
  filter caching bad /dev/loop2
  Opened /dev/sda2 RO O_DIRECT
  /dev/sda2: size is 2097152 sectors
  Closed /dev/sda2
  /dev/sda2: Device is a partition, using primary device sda for mpath component detection
  filter caching good /dev/sda2
  Opened /dev/loop3 RO O_DIRECT
  /dev/loop3: size is 0 sectors
  Closed /dev/loop3
  /dev/loop3: Skipping: Too small to hold a PV
  filter caching bad /dev/loop3
  Opened /dev/sda3 RO O_DIRECT
  /dev/sda3: size is 3902797791 sectors
  Closed /dev/sda3
  /dev/sda3: Device is a partition, using primary device sda for mpath component detection
  filter caching good /dev/sda3
  Opened /dev/loop4 RO O_DIRECT
  /dev/loop4: size is 0 sectors
  Closed /dev/loop4
  /dev/loop4: Skipping: Too small to hold a PV
  filter caching bad /dev/loop4
  Opened /dev/loop5 RO O_DIRECT
  /dev/loop5: size is 0 sectors
  Closed /dev/loop5
  /dev/loop5: Skipping: Too small to hold a PV
  filter caching bad /dev/loop5
  Opened /dev/loop6 RO O_DIRECT
  /dev/loop6: size is 0 sectors
  Closed /dev/loop6
  /dev/loop6: Skipping: Too small to hold a PV
  filter caching bad /dev/loop6
  Opened /dev/loop7 RO O_DIRECT
  /dev/loop7: size is 0 sectors
  Closed /dev/loop7
  /dev/loop7: Skipping: Too small to hold a PV
  filter caching bad /dev/loop7
  Checking fd limit for num_devs 3 want 35 soft 1024 hard 1048576
  Scanning 3 devices for VG info
  open /dev/sda ro di 0 fd 5
  open /dev/sda2 ro di 1 fd 6
  open /dev/sda3 ro di 2 fd 7
  Scanning submitted 3 reads
  Processing data from device /dev/sda 8:0 di 0 block 0x55b7f0341010
  /dev/sda: using cached size 3904897024 sectors
  /dev/sda: Skipping: Partition table signature found
  filter caching bad /dev/sda
  Processing data from device /dev/sda2 8:2 di 1 block 0x55b7f0341050
  /dev/sda2: using cached size 2097152 sectors
  /dev/sda2: Device is a partition, using primary device sda for mpath component detection
  /dev/sda2: using cached size 2097152 sectors
  filter caching good /dev/sda2
  /dev/sda2: No lvm label detected
  Processing data from device /dev/sda3 8:3 di 2 block 0x55b7f0341090
  /dev/sda3: using cached size 3902797791 sectors
  /dev/sda3: Device is a partition, using primary device sda for mpath component detection
  /dev/sda3: using cached size 3902797791 sectors
  filter caching good /dev/sda3
  /dev/sda3: lvm2 label detected at sector 1
  lvmcache /dev/sda3: now in VG #orphans_lvm2 #orpha-ns_l-vm2
  /dev/sda3: PV header extension version 2 found
  Scanning /dev/sda3 mda1 summary.
  Reading mda header sector from /dev/sda3 at 4096
  Reading metadata summary from /dev/sda3 at 673792 size 7731 (+0)
  Found metadata summary on /dev/sda3 at 673792 size 7731 for VG pve
  lvmcache adding vginfo for pve J7A0Ji-Q0nL-bPtN-8Lqq-PmdA-jTCl-EoZaIf
  lvmcache /dev/sda3: now in VG pve J7A0Ji-Q0nL-bPtN-8Lqq-PmdA-jTCl-EoZaIf
  lvmcache /dev/sda3: VG pve: set VGID to J7A0JiQ0nLbPtN8LqqPmdAjTClEoZaIf.
  lvmcache /dev/sda3 mda1 VG pve set seqno 276 checksum e8ce5439 mda_size 7731
  lvmcache /dev/sda3: VG pve: set creation host to pve.
  Scanned /dev/sda3 mda1 seqno 276
  Scanned devices: read errors 0 process errors 0 failed 0
  Found VG info for 1 VGs
  Getting list of all devices from system
  /dev/loop0: using cached size 0 sectors
  /dev/loop0: Skipping: Too small to hold a PV
  filter caching bad /dev/loop0
  /dev/sda: filter cache skipping (cached bad)
  /dev/cdrom: Skipping: Unrecognised LVM device type 11
  filter caching bad /dev/cdrom
  dm status   (253:0) [ noopencount noflush ]   [16384] (*1)
  /dev/pve/swap: Skipping unusable device.
  filter caching bad /dev/pve/swap
  /dev/loop1: using cached size 0 sectors
  /dev/loop1: Skipping: Too small to hold a PV
  filter caching bad /dev/loop1
  /dev/sda1: using cached size 2014 sectors
  /dev/sda1: Skipping: Too small to hold a PV
  filter caching bad /dev/sda1
  dm status   (253:1) [ noopencount noflush ]   [16384] (*1)
  /dev/pve/root: Skipping unusable device.
  filter caching bad /dev/pve/root
  /dev/loop2: using cached size 0 sectors
  /dev/loop2: Skipping: Too small to hold a PV
  filter caching bad /dev/loop2
  /dev/sda2: filter cache using (cached good)
  /dev/loop3: using cached size 0 sectors
  /dev/loop3: Skipping: Too small to hold a PV
  filter caching bad /dev/loop3
  /dev/sda3: filter cache using (cached good)
  /dev/loop4: using cached size 0 sectors
  /dev/loop4: Skipping: Too small to hold a PV
  filter caching bad /dev/loop4
  /dev/loop5: using cached size 0 sectors
  /dev/loop5: Skipping: Too small to hold a PV
  filter caching bad /dev/loop5
  /dev/loop6: using cached size 0 sectors
  /dev/loop6: Skipping: Too small to hold a PV
  filter caching bad /dev/loop6
  /dev/loop7: using cached size 0 sectors
  /dev/loop7: Skipping: Too small to hold a PV
  filter caching bad /dev/loop7
  Processing PVs in VG pve
  Locking /run/lock/lvm/V_pve RB
  _do_flock /run/lock/lvm/V_pve:aux WB
  _undo_flock /run/lock/lvm/V_pve:aux
  _do_flock /run/lock/lvm/V_pve RB
  Reading VG pve J7A0JiQ0nLbPtN8LqqPmdAjTClEoZaIf
  Reading mda header sector from /dev/sda3 at 4096
  rescan skipped - unchanged offset 669696 checksum e8ce5439
  Reading VG pve metadata from /dev/sda3 4096
  Reading mda header sector from /dev/sda3 at 4096
  VG pve metadata check /dev/sda3 mda 4096 slot0 offset 669696 size 7731
  Reading metadata from /dev/sda3 at 673792 size 7731 (+0)
  Allocated VG pve at 0x55b7f036b370.
  Importing logical volume pve/swap.
  Importing logical volume pve/root.
  Importing logical volume pve/data.
  Importing logical volume pve/vm-108-disk-0.
  Importing logical volume pve/vm-109-disk-0.
  Importing logical volume pve/vm-118-disk-0.
  Importing logical volume pve/vm-119-disk-0.
  Importing logical volume pve/vm-120-disk-0.
  Importing logical volume pve/vm-120-disk-1.
  Importing logical volume pve/vm-120-disk-2.
  Importing logical volume pve/vm-102-disk-0.
  Importing logical volume pve/vm-105-disk-0.
  Importing logical volume pve/vm-100-disk-0.
  Importing logical volume pve/vm-100-disk-1.
  Importing logical volume pve/vm-100-disk-2.
  Importing logical volume pve/vm-103-disk-0.
  Importing logical volume pve/vm-103-disk-1.
  Importing logical volume pve/vm-103-disk-2.
  Importing logical volume pve/vm-103-disk-3.
  Importing logical volume pve/vm-106-disk-0.
  Importing logical volume pve/data_tdata.
  Importing logical volume pve/data_tmeta.
  Importing logical volume pve/lvol0_pmspare.
  Logical volume pve/lvol0_pmspare is pool metadata spare.
  Stack pve/data:0[0] on LV pve/data_tdata:0.
  Adding pve/data:0 as an user of pve/data_tdata.
  Adding pve/data:0 as an user of pve/data_tmeta.
  Added delete message.
  Added delete message.
  Adding pve/vm-108-disk-0:0 as an user of pve/data.
  Adding pve/vm-109-disk-0:0 as an user of pve/data.
  Adding pve/vm-118-disk-0:0 as an user of pve/data.
  Adding pve/vm-119-disk-0:0 as an user of pve/data.
  Adding pve/vm-120-disk-0:0 as an user of pve/data.
  Adding pve/vm-120-disk-1:0 as an user of pve/data.
  Adding pve/vm-120-disk-2:0 as an user of pve/data.
  Adding pve/vm-102-disk-0:0 as an user of pve/data.
  Adding pve/vm-105-disk-0:0 as an user of pve/data.
  Adding pve/vm-100-disk-0:0 as an user of pve/data.
  Adding pve/vm-100-disk-1:0 as an user of pve/data.
  Adding pve/vm-100-disk-2:0 as an user of pve/data.
  Adding pve/vm-103-disk-0:0 as an user of pve/data.
  Adding pve/vm-103-disk-1:0 as an user of pve/data.
  Adding pve/vm-103-disk-2:0 as an user of pve/data.
  Adding pve/vm-103-disk-3:0 as an user of pve/data.
  Adding pve/vm-106-disk-0:0 as an user of pve/data.
  Found metadata on /dev/sda3 at 673792 size 7731 for VG pve
  lvmcache_update_vg pve for info /dev/sda3
  metadata/lvs_history_retention_time not found in config: defaulting to 0
  /dev/sda3 0:      0   2048: swap(0:0)
  /dev/sda3 1:   2048  24576: root(0:0)
  /dev/sda3 2:  26624 437502: data_tdata(0:0)
  /dev/sda3 3: 464126   4048: data_tmeta(0:0)
  /dev/sda3 4: 468174   4048: lvol0_pmspare(0:0)
  /dev/sda3 5: 472222   4193: NULL(0:0)
  /dev/sda3: using cached size 3902797791 sectors
  Processing PV /dev/sda3 in VG pve.
  PV /dev/sda3   VG pve             lvm2 [<1.82 TiB / <16.38 GiB free]
  Unlock: Memlock counters: prioritized:0 locked:0 critical:0 daemon:0 suspended:0
  Syncing device names
  Unlocking /run/lock/lvm/V_pve
  _undo_flock /run/lock/lvm/V_pve
  Freeing VG pve at 0x55b7f036b370.
  Processing PVs in VG #orphans_lvm2
  Reading orphan VG #orphans_lvm2.
  Total: 1 [<1.82 TiB] / in use: 1 [<1.82 TiB] / in no VG: 0 [0   ]
  Unlocking /run/lock/lvm/P_global
  _undo_flock /run/lock/lvm/P_global
  Destroy lvmcache content
  Completed: pvscan -vvv
root@pve:~#

Any idea how to fix this issue? Thanks
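
One thing the verbose scan does show: the LVM metadata on /dev/sda3 reads back cleanly (seqno 276, matching checksum), so the VG layout itself looks intact and the damage is inside the pool's metadata device (data_tmeta). To see the concrete thin_check error rather than LVM's one-line summary, newer LVM versions allow activating the hidden metadata LV as a read-only component while the pool is inactive (on older versions the metadata-swap route above is needed); a sketch:

Code:
lvchange -ay pve/data_tmeta            # component activation; pool must be inactive
thin_check /dev/mapper/pve-data_tmeta  # prints the actual metadata error
lvchange -an pve/data_tmeta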
 
Code:
root@pve:~# lvscan
  ACTIVE            '/dev/pve/swap' [4.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [96.00 GiB] inherit
  ACTIVE            '/dev/pve/data' [<1.67 TiB] inherit
  inactive          '/dev/pve/vm-108-disk-0' [100.00 GiB] inherit
  inactive          '/dev/pve/vm-109-disk-0' [300.00 GiB] inherit
  inactive          '/dev/pve/vm-118-disk-0' [40.00 GiB] inherit
  inactive          '/dev/pve/vm-119-disk-0' [50.00 GiB] inherit
  inactive          '/dev/pve/vm-120-disk-0' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-120-disk-1' [250.00 GiB] inherit
  inactive          '/dev/pve/vm-120-disk-2' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-102-disk-0' [100.00 GiB] inherit
  inactive          '/dev/pve/vm-105-disk-0' [100.00 GiB] inherit
  inactive          '/dev/pve/vm-100-disk-0' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-100-disk-1' [250.00 GiB] inherit
  inactive          '/dev/pve/vm-100-disk-2' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-103-disk-0' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-103-disk-1' [250.00 GiB] inherit
  inactive          '/dev/pve/vm-103-disk-2' [4.00 MiB] inherit
  inactive          '/dev/pve/vm-103-disk-3' [50.00 GiB] inherit
  inactive          '/dev/pve/vm-106-disk-0' [100.00 GiB] inherit

Code:
root@pve:~# lvchange -ay /dev/pve/vm-108-disk-0
  device-mapper: reload ioctl on  (253:5) failed: No data available
root@pve:~#
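
"No data available" is ENODATA from the kernel: the pool's metadata holds no mapping for that thin device, which would fit repaired metadata that came back (mostly) empty. The kernel log and the pool's status line usually make this explicit; a quick read-only check (sketch):

Code:
dmesg | tail -n 20          # device-mapper logs the reason for the failed table reload
dmsetup status | grep data  # pool status line: transaction id, metadata/data usage, rw/ro/fail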

Code:
root@pve:~# lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                  8:0    0  1.8T  0 disk
├─sda1               8:1    0 1007K  0 part
├─sda2               8:2    0    1G  0 part /boot/efi
└─sda3               8:3    0  1.8T  0 part
  ├─pve-swap       253:0    0    4G  0 lvm 
  ├─pve-root       253:1    0   96G  0 lvm  /
  ├─pve-data_tmeta 253:2    0 15.8G  0 lvm 
  │ └─pve-data     253:4    0  1.7T  0 lvm 
  └─pve-data_tdata 253:3    0  1.7T  0 lvm 
    └─pve-data     253:4    0  1.7T  0 lvm 
sdb                  8:16   1  7.4G  0 disk
├─sdb1               8:17   1  1.4G  0 part
└─sdb2               8:18   1  4.6M  0 part
sr0                 11:0    1 1024M  0 rom 
root@pve:~#
 
Code:
root@pve:~# lvs -o name,metadata_percent,data_percent,chunk_size,size --all
  LV              Meta%  Data%  Chunk   LSize
  data            0.15   0.00   64.00k  <1.67t
  [data_tdata]                      0   <1.67t
  [data_tmeta]                      0   15.81g
  [lvol2_pmspare]                   0   15.81g
  root                              0   96.00g
  swap                              0    4.00g
  vm-100-disk-0                     0    4.00m
  vm-100-disk-1                     0  250.00g
  vm-100-disk-2                     0    4.00m
  vm-102-disk-0                     0  100.00g
  vm-103-disk-0                     0    4.00m
  vm-103-disk-1                     0  250.00g
  vm-103-disk-2                     0    4.00m
  vm-103-disk-3                     0   50.00g
  vm-105-disk-0                     0  100.00g
  vm-106-disk-0                     0  100.00g
  vm-108-disk-0                     0  100.00g
  vm-109-disk-0                     0  300.00g
  vm-118-disk-0                     0   40.00g
  vm-119-disk-0                     0   50.00g
  vm-120-disk-0                     0    4.00m
  vm-120-disk-1                     0  250.00g
  vm-120-disk-2                     0    4.00m
root@pve:~#

Looks like the VM images are still there, so why aren't they being mapped correctly?

Code:
root@pve:~# lvchange -ay /dev/pve/vm-108-disk-0
  device-mapper: reload ioctl on  (253:5) failed: No data available
 
It would appear that your metadata has been corrupted.
You could try the vgcfgrestore command; I have no experience with it, so do your own homework (see man vgcfgrestore).
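
A caution worth adding: vgcfgrestore restores the VG layout from /etc/lvm/archive and /etc/lvm/backup, not the thin pool's internal block mappings, and by default it refuses to touch a VG containing thin volumes (it needs --force). Treat it as a last resort. A sketch, with a placeholder archive file name:

Code:
vgcfgrestore --list pve                                    # show archived metadata versions
vgcfgrestore --force -f /etc/lvm/archive/pve_NNNNN.vg pve  # placeholder name; pick one from the list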

You should also consider why this has happened. It is a rare event. A simple reboot should not have caused this. What condition is the disk in?
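
To check the disk's condition: the by-path name (pci-0000:03:00.0-scsi-0:2:0:0) suggests the drive sits behind a RAID controller, in which case plain smartctl may need a controller-specific -d option. A sketch, assuming smartmontools is installed and guessing at a MegaRAID-style controller:

Code:
smartctl -a /dev/sda                 # may not reach the physical disk behind HW RAID
smartctl -a -d megaraid,0 /dev/sda   # assumption: first physical drive on a MegaRAID HBA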
 
Can you show the output of the following commands:

Code:
$ lvs
$ vgs
$ pvs

Here are the results:

Code:
root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-a-tz--  <1.67t             0.00   0.15                          
  root          pve -wi-ao----  96.00g                                                  
  swap          pve -wi-a-----   4.00g                                                  
  vm-100-disk-0 pve Vwi---tz--   4.00m data                                              
  vm-100-disk-1 pve Vwi---tz-- 250.00g data                                              
  vm-100-disk-2 pve Vwi---tz--   4.00m data                                              
  vm-102-disk-0 pve Vwi---tz-- 100.00g data                                              
  vm-103-disk-0 pve Vwi---tz--   4.00m data                                              
  vm-103-disk-1 pve Vwi---tz-- 250.00g data                                              
  vm-103-disk-2 pve Vwi---tz--   4.00m data                                              
  vm-103-disk-3 pve Vwi---tz--  50.00g data                                              
  vm-105-disk-0 pve Vwi---tz-- 100.00g data                                              
  vm-106-disk-0 pve Vwi---tz-- 100.00g data                                              
  vm-108-disk-0 pve Vwi---tz-- 100.00g data                                              
  vm-109-disk-0 pve Vwi---tz-- 300.00g data                                              
  vm-118-disk-0 pve Vwi---tz--  40.00g data                                              
  vm-119-disk-0 pve Vwi---tz--  50.00g data                                              
  vm-120-disk-0 pve Vwi---tz--   4.00m data                                              
  vm-120-disk-1 pve Vwi---tz-- 250.00g data                                              
  vm-120-disk-2 pve Vwi---tz--   4.00m data                                              
root@pve:~#

Code:
root@pve:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1  20   0 wz--n- <1.82t <20.38g
root@pve:~#
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize  PFree
  /dev/sda3  pve lvm2 a--  <1.82t <20.38g
root@pve:~#
 
The lvs output shows your data pool.
From the Proxmox GUI, could you check Datacenter > Storage and confirm whether the local-lvm storage still exists?
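
The same check is possible from the shell with standard Proxmox tools (local-lvm is the default storage name and may differ on this system):

Code:
pvesm status                                # lists storages and whether each is active
grep -A3 'local-lvm' /etc/pve/storage.cfg   # the lvmthin storage definition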
 
