My LVM-Thin is gone (?) - vdisks missing/empty (lvscan shows inactive)

jaceqp

Well-Known Member
May 28, 2018
Hi there.
I recently added a new pair of server disks (hardware RAID1). I also created a new LVM-Thin pool on them and moved my VM's boot vdisk over from the old LVM storage.
It worked like a charm for a few days. Just now I found the VM in a boot loop - "no bootable device".

In the Proxmox GUI everything seemed to look just fine. After a reboot of the PVE host, the vdisks are still listed on the new LVM-Thin storage, yet its usage is 0%! :/

My lvscan shows:
Code:
***@ip:~# lvscan
  inactive          '/dev/raid1_2x1200GB/daneVM' [1.09 TiB] inherit
  inactive          '/dev/raid1_2x1200GB/vm-100-disk-1' [150.00 GiB] inherit
  inactive          '/dev/raid1_2x1200GB/vm-100-disk-2' [250.00 GiB] inherit
  ACTIVE            '/dev/pve/swap' [16.00 GiB] inherit
  ACTIVE            '/dev/pve/root' [20.00 GiB] inherit
  ACTIVE            '/dev/pve/data' [496.03 GiB] inherit
  ACTIVE            '/dev/pve/vm-100-disk-2' [400.00 GiB] inherit
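
Everything on the new VG is inactive, including the daneVM pool itself. Normally an inactive LV just needs activating; a minimal attempt (assuming healthy pool metadata - the vgchange output further down shows why this fails here) would be:
Code:
***@ip:~# lvchange -ay raid1_2x1200GB/daneVM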

lsblk:
Code:
***@ip:~# lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                            8:0    0 558.4G  0 disk
├─sda1                         8:1    0     1M  0 part
├─sda2                         8:2    0   256M  0 part /boot/efi
└─sda3                         8:3    0 558.2G  0 part
  ├─pve-swap                 253:0    0    16G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    20G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   5.1G  0 lvm
  │ └─pve-data-tpool         253:4    0   496G  0 lvm
  │   ├─pve-data             253:5    0   496G  0 lvm
  │   └─pve-vm--100--disk--2 253:6    0   400G  0 lvm
  └─pve-data_tdata           253:3    0   496G  0 lvm
    └─pve-data-tpool         253:4    0   496G  0 lvm
      ├─pve-data             253:5    0   496G  0 lvm
      └─pve-vm--100--disk--2 253:6    0   400G  0 lvm
sdb                            8:16   0   1.1T  0 disk
└─sdb1                         8:17   0   1.1T  0 part
sr0                           11:0    1  1024M  0 rom
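
Note that sdb1 has no device-mapper children at all, so nothing from the new VG is currently mapped. A quick cross-check (sketch only, I haven't pasted the output here):
Code:
***@ip:~# lvs -a raid1_2x1200GB
***@ip:~# dmsetup ls | grep raid1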

/etc/pve/storage.cfg
Code:
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso


lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images


dir: backup
        path /mnt/USB
        content vztmpl,backup,iso
        maxfiles 5
        shared 0


cifs: WDcloud
        path /mnt/pve/WDcloud
        server 10.20.30.244
        share PROXMOX
        content backup,iso,images
        maxfiles 3
        username backuppm


lvmthin: daneVM2
        thinpool daneVM
        vgname raid1_2x1200GB
        content images,rootdir
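
For completeness, the Proxmox-side view of these storages can be checked with pvesm (sketch only, output not pasted):
Code:
***@ip:~# pvesm status
***@ip:~# pvesm list daneVM2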

Code:
****@ip:~# df -h
Filesystem              Size  Used Avail Use% Mounted on
udev                    7.7G     0  7.7G   0% /dev
tmpfs                   1.6G  8.9M  1.6G   1% /run
/dev/mapper/pve-root     20G  8.2G   11G  44% /
tmpfs                   7.8G   43M  7.7G   1% /dev/shm
tmpfs                   5.0M  4.0K  5.0M   1% /run/lock
tmpfs                   7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/sda2               253M  288K  252M   1% /boot/efi
/dev/fuse                30M   20K   30M   1% /etc/pve
//*.*.*.*/PROXMOX  1.8T  781G  1.1T  43% /mnt/pve/WDcloud
tmpfs                   1.6G     0  1.6G   0% /run/user/0

The VM's failed start log:

Code:
kvm: -drive file=/dev/raid1_2x1200GB/vm-100-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on: Could not open '/dev/raid1_2x1200GB/vm-100-disk-1': No such file or directory
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -name Win2016 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=667db880-be40-45e5-8ed0-5107a99962bb' -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vga std -vnc unix:/var/run/qemu-server/100.vnc,x509,password -no-hpet -cpu 'kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,enforce' -m 14336 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -chardev 'socket,path=/var/run/qemu-server/100.qga,server,nowait,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:33eb8940d3c1' -drive 'file=/var/lib/vz/template/iso/virtio-win-0.1.141.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/dev/raid1_2x1200GB/vm-100-disk-1,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -drive 'file=/dev/raid1_2x1200GB/vm-100-disk-2,if=none,id=drive-scsi1,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi1,id=scsi1' -drive 'file=/mnt/pve/WDcloud/images/100/vm-100-disk-1.raw,if=none,id=drive-scsi2,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=2,drive=drive-scsi2,id=scsi2' -drive 'file=/dev/pve/vm-100-disk-2,if=none,id=drive-virtio1,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'virtio-blk-pci,drive=drive-virtio1,id=virtio1,bus=pci.0,addr=0xb' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=0E:72:29:84:B4:D6,netdev=net0,bus=pci.0,addr=0x12,id=net0' -rtc 'driftfix=slew,base=localtime' -global 'kvm-pit.lost_tick_policy=discard'' failed: exit code 1
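
The "No such file or directory" fits the inactive LVs above: the /dev/raid1_2x1200GB/* device nodes only exist while the LVs are active. A quick way to confirm:
Code:
***@ip:~# ls -l /dev/raid1_2x1200GB/
# presumably missing or empty while the LVs are inactive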


VM conf file:


Code:
/etc/pve/nodes/ip/qemu-server/100.conf                                                                                               
agent: 1
balloon: 8132
boot: cd
bootdisk: scsi0
cores: 4
ide2: local:iso/virtio-win-0.1.141.iso,media=cdrom,size=309208K
memory: 14336
name: Win2016
net0: virtio=0E:72:29:84:B4:D6,bridge=vmbr1
numa: 0
ostype: win10
scsi0: daneVM2:vm-100-disk-1,size=150G
scsi1: daneVM2:vm-100-disk-2,backup=0,size=250G
scsi2: WDcloud:100/vm-100-disk-1.raw,backup=0,size=100G
scsihw: virtio-scsi-pci
smbios1: uuid=667db880-be40-45e5-8ed0-5107a99962bb
sockets: 1
virtio1: local-lvm:vm-100-disk-2,backup=0,size=400G

The RAID controller shows no errors on any disk or RAID volume.
Please help me out here, because I literally have no idea what happened :/ My main concern is recovering that 150GB vdisk.

PS. The VM's OS is Windows Server 2016. I booted the VM from the OS install ISO, and on the disk-selection screen I can see the 'missing' disks, but with an "OFFLINE" status. Any ideas?
 
Code:
***@ip:~# vgchange -a y raid1_2x1200GB
  Check of pool raid1_2x1200GB/daneVM failed (status:1). Manual repair required!
  0 logical volume(s) in volume group "raid1_2x1200GB" now active
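
From what I've read, that "Manual repair required" message usually points at lvconvert --repair, which rebuilds the thin pool metadata into a fresh LV (it needs some free space in the VG for that, and reportedly keeps the old metadata as daneVM_meta0 for inspection). A sketch of the repair sequence as I understand it, not something I've run yet:
Code:
***@ip:~# lvconvert --repair raid1_2x1200GB/daneVM
***@ip:~# vgchange -ay raid1_2x1200GB
***@ip:~# lvs -a raid1_2x1200GB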

I have no idea what happened. The last task I remember was creating a 250GB vdisk on the pool and copying some data from the old 400GB vdisk to it (inside the VM). But that 250GB vdisk had been created a few days earlier, and everything was up and running nicely.

Also worth mentioning: while creating the LV on the new RAID volume, I used the full capacity with:


Code:
***@ip:~# lvcreate --type thin-pool -L 100G -n daneVM raid1_2x1200GB
***@ip:~# lvextend -l +100%FREE raid1_2x1200GB/daneVM
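
In hindsight, that +100%FREE may be part of the problem: if the pool's data LV swallows every free extent in the VG, there is no room left for the metadata LV to grow, nor for a repair to allocate a temporary metadata LV. A way to check (standard lvs fields), and, if metadata is nearly full and some extents can be freed, to grow it:
Code:
***@ip:~# lvs -a -o lv_name,lv_size,data_percent,metadata_percent raid1_2x1200GB
***@ip:~# lvextend --poolmetadatasize +1G raid1_2x1200GB/daneVM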
 
One more thing... will this be of any use?

Code:
****@ip:~# mc    (left panel: /etc/lvm/archive)
 pve_00017-1328206897.vg              7217  Feb  8 09:29
 pve_00018-1647792682.vg              6761  Feb  8 09:35
 pve_00019-542025813.vg               6371  Mar  8 09:17
 pve_00020-1236772161.vg              5974  Mar  8 09:17
 pve_00021-1836812745.vg              5531  Mar  8 09:17
 pve_00022-1602440127.vg              5087  Mar 15 08:15
 pve_00023-1884729894.vg              4644  Mar 15 08:15
 pve_00024-1932293855.vg              4189  Mar 15 22:37
 pve_00025-1980636159.vg              3832  Mar 19 16:45
 pve_00026-1616714713.vg              4188  Mar 19 17:00
 raid1_2x1200GB_00000-1181716221.vg    891  Mar 15 18:12
 raid1_2x1200GB_00001-1258481920.vg    916  Mar 15 18:13
 raid1_2x1200GB_00002-2127414013.vg   2584  Mar 15 18:14
 raid1_2x1200GB_00003-2015542986.vg   2816  Mar 15 21:45
 raid1_2x1200GB_00004-787718695.vg    3213  Mar 19 16:41

Code:
***@ip:~# vgcfgrestore --list raid1_2x1200GB

  File:         /etc/lvm/archive/raid1_2x1200GB_00000-1181716221.vg
  VG name:      raid1_2x1200GB
  Description:  Created *before* executing 'vgcreate raid1_2x1200GB /dev/sdb1'
  Backup Time:  Mon Mar 15 18:12:16 2021


  File:         /etc/lvm/archive/raid1_2x1200GB_00001-1258481920.vg
  VG name:      raid1_2x1200GB
  Description:  Created *before* executing 'lvcreate --type thin-pool -L 100G -n daneVM raid1_2x1200GB'
  Backup Time:  Mon Mar 15 18:13:29 2021


  File:         /etc/lvm/archive/raid1_2x1200GB_00002-2127414013.vg
  VG name:      raid1_2x1200GB
  Description:  Created *before* executing 'lvextend -l +100%FREE raid1_2x1200GB/daneVM'
  Backup Time:  Mon Mar 15 18:14:28 2021


  File:         /etc/lvm/archive/raid1_2x1200GB_00003-2015542986.vg
  VG name:      raid1_2x1200GB
  Description:  Created *before* executing '/sbin/lvcreate -aly -V 157286400k --name vm-100-disk-1 --thinpool raid1_2x1200GB/daneVM'
  Backup Time:  Mon Mar 15 21:45:40 2021


  File:         /etc/lvm/archive/raid1_2x1200GB_00004-787718695.vg
  VG name:      raid1_2x1200GB
  Description:  Created *before* executing '/sbin/lvcreate -aly -V 262144000k --name vm-100-disk-2 --thinpool raid1_2x1200GB/daneVM'
  Backup Time:  Fri Mar 19 16:41:25 2021


  File:         /etc/lvm/backup/raid1_2x1200GB
  VG name:      raid1_2x1200GB
  Description:  Created *after* executing '/sbin/lvcreate -aly -V 262144000k --name vm-100-disk-2 --thinpool raid1_2x1200GB/daneVM'
  Backup Time:  Fri Mar 19 16:41:25 2021
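
If nothing else works, I understand vgcfgrestore can roll the VG metadata back to one of these archives, but for a VG containing thin pools it refuses without --force, and the man page warns it can make things worse, since it does not touch the thin pool's internal metadata. A sketch only, using the newest archive:
Code:
***@ip:~# vgcfgrestore --force -f /etc/lvm/archive/raid1_2x1200GB_00004-787718695.vg raid1_2x1200GB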
 