[SOLVED] LVM Storage Accidentally Deleted

Aug 30, 2022
Hello Everyone,

We made a mistake and, without going into details, our two large storages on the cluster's primary server were deleted. However, through some miracle, and because all of the virtual machines / containers were running at the time, no data has been lost (yet). Everything is still running: the command line shows that my volumes are intact, and listing the mount points shows everything is there.

Please, please tell me there is a command that will re-read / re-import things, possibly the LVM metadata, and put everything back the way it was.
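
For reference, I understand LVM keeps automatic metadata backups under /etc/lvm/backup and archived versions under /etc/lvm/archive, so I was hoping something along these lines would apply (the archive file name is just a placeholder - I have not run anything yet):

Bash:
# list the metadata archives LVM has kept for the VG
vgcfgrestore --list vg_content
# restore a chosen archive - only after taking a backup, and not against active LVs
vgcfgrestore -f /etc/lvm/archive/<archive-file>.vg vg_content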

Bash:
# mount | grep lv
/dev/mapper/vg_content-lv_content on /content type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/vg_vmboot-lv_vmboot on /vmboot type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)

Bash:
# pvesm scan lvm
pve
vg_content
vg_vmboot

Bash:
# vgdisplay
  --- Volume group ---
  VG Name               vg_content
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               10.91 TiB
  PE Size               4.00 MiB
  Total PE              2861022
  Alloc PE / Size       0 / 0   
  Free  PE / Size       2861022 / 10.91 TiB
  VG UUID               Uty188-trmx-1bxY-Dr7s-2Dek-QBw3-ydVYSN
  
  --- Volume group ---
  VG Name               vg_vmboot
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               931.51 GiB
  PE Size               4.00 MiB
  Total PE              238467
  Alloc PE / Size       0 / 0   
  Free  PE / Size       238467 / 931.51 GiB
  VG UUID               Ir9kpb-hcWD-rqUF-9GHd-Rhe4-Kr7C-OjLwT7
 
We made a mistake and, without going into details, our two large storages on the cluster's primary server were deleted.
How/where did you delete them?

Otherwise - to get at least some kind of picture - please post:
* /etc/pve/storage.cfg
* /etc/pve/qemu-server/<vmid>.conf (for one of the VMs which has/had its disks on those storages)
* /etc/fstab
* outputs of:
** lsblk
** lvs
** pvs
** vgs
** zpool status
** zfs list
** mount
 
Thanks Stoiko for the reply...

Note: we were able to get our storages activated again, but they show as 100% full in the GUI and none of the content is listed anymore.

Here is my storage.cfg:

Code:
dir: local
        path /var/lib/vz
        content vztmpl,iso,backup

lvm: VMboot
        vgname vg_vmboot
        content images

lvm: content
        vgname vg_content
        content images

Here are my mounts:

Code:
# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=65871992k,nr_inodes=16467998,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=13181416k,mode=755,inode64)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=19254)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
/dev/sdb2 on /boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro)
/dev/mapper/vg_vmboot-VMboot on /vmboot type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/mapper/vg_content-lv_content on /content type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=13181412k,nr_inodes=3295353,mode=700,inode64)

LVS:

Code:
  LV         VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root       pve        -wi-ao----  96.00g                                                  
  swap       pve        -wi-ao----   8.00g                                                  
  lv_content vg_content -wi-ao----  10.91t                                                  
  VMboot     vg_vmboot  -wi-ao---- 931.51g

PVS:

Code:
  PV           VG         Fmt  Attr PSize    PFree  
  /dev/nvme0n1 vg_vmboot  lvm2 a--   931.51g       0
  /dev/sda     vg_content lvm2 a--    10.91t       0
  /dev/sdb3    pve        lvm2 a--  <446.63g <342.63g

VGS:

Code:
# vgs
  VG         #PV #LV #SN Attr   VSize    VFree  
  pve          1   2   0 wz--n- <446.63g <342.63g
  vg_content   1   1   0 wz--n-   10.91t       0
  vg_vmboot    1   1   0 wz--n-  931.51g       0

VM Config:

Code:
# cat /etc/pve/qemu-server/101.conf
boot: order=scsi0
memory: 2048
meta: creation-qemu=6.2.0,ctime=1660873235
name: pandaops
net0: virtio=CE:5A:2E:D8:1A:DE,bridge=vmbr1
onboot: 1
ostype: l26
scsi0: content:101/vm-101-disk-0.raw,size=40G
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=fd420880-4dc3-4514-aaf7-a990edc5bcc5
vmgenid: 95cdaaf7-23c1-455a-a0fa-269b3337a5ef
 
Note: we were able to get our storages activated again, but they show as 100% full in the GUI and none of the content is listed anymore.
Again - how did you "delete" them in the first place? Without this information, all I can do here is guesswork, which could result in data loss.

In any case, create backups of your disks and the data on top of them, and make sure those backups work!!
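
For the file-level data on those two filesystems, something as simple as rsync to another machine would do - backup-host and the target paths below are just placeholders:
Code:
# copy the contents of both XFS filesystems somewhere safe first
rsync -aHAX --numeric-ids /content/ root@backup-host:/backup/content/
rsync -aHAX --numeric-ids /vmboot/ root@backup-host:/backup/vmboot/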

That being said - I think the issue here is that you created an LVM VG on /dev/nvme0n1 and on /dev/sda, but then created only a single LV in each and put an XFS filesystem directly on it.

So for PVE these are not LVM (or lvm-thin) storages, but directory storages....

/dev/mapper/vg_vmboot-VMboot on /vmboot type xfs
/dev/mapper/vg_content-lv_content on /content type xfs
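
You can double-check that picture with lsblk - it should show exactly one LV per disk, with an XFS filesystem mounted on it (the column list is just a suggestion):
Code:
# show the disk -> LV -> filesystem -> mountpoint chain for both disks
lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT /dev/sda /dev/nvme0n1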

I think that the following might work in /etc/pve/storage.cfg:
Code:
dir: content
    path /content
    content iso,vztmpl,images,backup,rootdir

dir: vmboot
    path /vmboot
    content iso,vztmpl,images,backup,rootdir
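
The same result can probably be achieved with pvesm instead of editing the file by hand - you would have to remove the re-added lvm entries first, since the storage IDs would clash (treat this as a sketch, not something I have tested on your node):
Code:
# drop the lvm-type entries, then add the mounted filesystems as directory storages
pvesm remove VMboot
pvesm remove content
pvesm add dir content --path /content --content iso,vztmpl,images,backup,rootdir
pvesm add dir vmboot --path /vmboot --content iso,vztmpl,images,backup,rootdir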

I hope this helps!
 
We deleted them with the pvesm tool on the CLI shell.
OK - that probably means my guess was correct - the data is still there; you just deleted the storage configuration...

If you have a backup of your /etc/pve - check and compare the contents of /etc/pve/storage.cfg ...
If not - try my suggestion from above (after creating a backup!!) - and plan to add a backup of the config files of your PVE nodes.
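
A simple scheduled tarball of the important config directories is usually enough for that - the destination below is just an example, and the archive should also be copied off the node:
Code:
# archive the PVE and LVM config files (example destination - adjust as needed)
tar czf /root/pve-config-$(date +%F).tar.gz /etc/pve /etc/lvm /etc/network/interfaces /etc/fstab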

I hope this helps!
 