After force-shutdown pve-data not active

watcherkb

New Member
Aug 23, 2020
I have often had the problem that I cannot start VMs after restarting the node.
What has helped me so far:

Code:
lvchange -an /dev/pve/data
lvconvert --repair /dev/pve/data
lvchange -ay /dev/pve/data
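
After a repair like this it can be worth checking how full the pool's data and metadata are, since a full metadata LV is a common reason a thin pool refuses to activate in the first place. A minimal check (just a sketch, assuming the standard pve/data pool):
Code:
# show data and metadata fill level of the thin pool
lvs -a -o lv_name,data_percent,metadata_percent pve/data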

But now it seems I don't have enough free space:
Code:
Volume group "pve" has insufficient free space (559 extents): 884 required.
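
For context: with the default 4 MiB extent size, the 884 required extents are roughly 3.45 GiB (the size of a new spare metadata LV), while the 559 free extents correspond to the 2.18 GiB of free space shown by pvs below. The extent size can be confirmed with (a sketch):
Code:
# show extent size and free extents of the volume group
vgdisplay pve | grep 'PE'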

Code:
root@nuc:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 465.8G  0 disk
├─sda1               8:1    0  1007K  0 part
├─sda2               8:2    0   512M  0 part
└─sda3               8:3    0 465.3G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0    96G  0 lvm  /
  ├─pve-data_meta0 253:2    0   3.5G  0 lvm
  ├─pve-data_meta1 253:3    0   3.5G  0 lvm
  ├─pve-data_meta2 253:4    0   3.5G  0 lvm
  ├─pve-data_meta3 253:5    0   3.5G  0 lvm
  └─pve-data_meta4 253:6    0   3.5G  0 lvm

Code:
root@nuc:~# pvs
  PV             VG   Fmt  Attr PSize    PFree
  /dev/sda3      pve  lvm2 a--  <465.26g  2.18g
root@nuc:~# vgs
  VG   #PV #LV #SN Attr   VSize    VFree
  pve    1  27   0 wz--n- <465.26g  2.18g
root@nuc:~# lvs
  LV                                        VG  Attr       LSize    Pool Origin                                   Data%  Meta%  Move Log Cpy%Sync Convert
  data                                      pve twi---tz-- <338.36g                                                                                                        
  data_meta0                                pve -wi-a-----    3.45g                                                                                                        
  data_meta1                                pve -wi-a-----    3.45g                                                                                                        
  data_meta2                                pve -wi-a-----    3.45g                                                                                                        
  data_meta3                                pve -wi-a-----    3.45g                                                                                                        
  data_meta4                                pve -wi-a-----    3.45g                                                                                                        
  root                                      pve -wi-ao----   96.00g                                                                                                        
  snap_vm-100-disk-0_Snap20191014           pve Vri---tz-k    4.00g data vm-100-disk-0                                                                                    
  snap_vm-101-disk-0_Snap20191106           pve Vri---tz-k   10.00g data vm-101-disk-0                                                                                    
  snap_vm-102-disk-0_Snap20191031           pve Vri---tz-k    2.00g data vm-102-disk-0                                                                                    
  snap_vm-110-disk-0_Vor_Alarmanlagenscript pve Vri---tz-k   32.00g data                                                                                                  
  snap_vm-110-disk-0_Vor_Javascript_Update  pve Vri---tz-k   32.00g data                                                                                                  
  swap                                      pve -wi-ao----    8.00g                                                                                                        
  vm-100-disk-0                             pve Vwi---tz--    4.00g data                                                                                                  
  vm-101-disk-0                             pve Vwi---tz--   10.00g data                                                                                                  
  vm-102-disk-0                             pve Vwi---tz--    2.00g data                                                                                                  
  vm-103-disk-0                             pve Vwi---tz--    8.00g data                                                                                                  
  vm-105-disk-0                             pve Vwi---tz--    5.00g data                                                                                                  
  vm-106-disk-0                             pve Vwi---tz--    8.00g data                                                                                                  
  vm-107-disk-0                             pve Vwi---tz--    8.00g data                                                                                                  
  vm-108-disk-0                             pve Vwi---tz--    8.00g data                                                                                                  
  vm-108-disk-1                             pve Vwi---tz--    8.00g data                                                                                                  
  vm-110-disk-0                             pve Vwi---tz--   32.00g data snap_vm-110-disk-0_Vor_Javascript_Update                                                          
  vm-110-state-Vor_Alarmanlagenscript       pve Vwi---tz--    8.39g data                                                                                                  
  vm-110-state-Vor_Javascript_Update        pve Vwi---tz--    8.39g data                                                                                                  
  vm-111-disk-0                             pve Vwi---tz--   35.00g data                                                                                                  
  vm-200-disk-0                             pve Vwi---tz--   32.00g data

I hope I can find a solution to get more space to repair it.
 
Please post the output of pveversion -v as well as the journal since the boot (journalctl -b).
 

Thank you, mira. Here are the outputs (please have a look at the attachment too):

Code:
root@nuc:~# pveversion -v
proxmox-ve: 6.2-1 (running kernel: 5.4.55-1-pve)
pve-manager: 6.2-11 (running version: 6.2-11/22fb4983)
pve-kernel-5.4: 6.2-5
pve-kernel-helper: 6.2-5
pve-kernel-5.0: 6.0-11
pve-kernel-5.4.55-1-pve: 5.4.55-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-2-pve: 5.0.21-7
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.4
libpve-access-control: 6.1-2
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.1-5
libpve-guest-common-perl: 3.1-2
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-6
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.2-10
pve-cluster: 6.1-8
pve-container: 3.1-12
pve-docs: 6.2-5
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-2
pve-firmware: 3.1-2
pve-ha-manager: 3.0-9
pve-i18n: 2.1-3
pve-qemu-kvm: 5.0.0-12
pve-xtermjs: 4.7.0-1
qemu-server: 6.2-11
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve1
 


Thank you for the journal.
Can you delete snapshots to see if it frees enough space for you to repair it?
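For reference, a snapshot can also be removed from the CLI with qm, e.g. for VM 110 with one of the snapshot names visible in the lvs output above (just a sketch):
Code:
qm delsnapshot 110 Vor_Javascript_Update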
 
@mira that's one of the things I tried, but without success:
Code:
Check of pool pve/data failed (status:1). Manual repair required!
TASK ERROR: lvremove 'pve/vm-110-state-Vor_Javascript_Update' error:   Failed to update pool pve/data.
 
Another user had a similar issue a few months ago: https://forum.proxmox.com/threads/not-enough-space-on-thin-lvm.69333/

Looks like what they did was remove the leftover metadata. This can lead to data loss. For more info, take a look at the lvmthin manpage (man lvmthin):
Code:
Repair performs the following steps:

       1. Creates a new, repaired copy of the metadata.
       lvconvert runs the thin_repair command to read damaged metadata from the existing pool metadata LV, and writes a new repaired copy to the VG's pmspare LV.

       2. Replaces the thin pool metadata LV.
       If step 1 is successful, the thin pool metadata LV is replaced with the pmspare LV containing the corrected metadata. The previous thin pool metadata LV, containing the damaged metadata, becomes
       visible with the new name ThinPoolLV_tmetaN (where N is 0,1,...).

       If  the  repair works, the thin pool LV and its thin LVs can be activated, and the LV containing the damaged thin pool metadata can be removed.  It may be useful to move the new metadata LV (previously
       pmspare) to a better PV.

       If the repair does not work, the thin pool LV and its thin LVs are lost.

       If metadata is manually restored with thin_repair directly, the pool metadata LV can be manually swapped with another LV containing new metadata:

       lvconvert --thinpool VG/ThinPoolLV --poolmetadata VG/NewThinMetaLV
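
Based on those steps, a rough sketch of what reclaiming space this way could look like on this system, assuming data_meta0 really is a stale backup left over from an earlier repair run (removing the wrong LV can mean losing the only good copy of the metadata, so double-check first):
Code:
# confirm it is one of the leftover metadata backups (plain linear LV, ~3.45 GiB)
lvs -o lv_name,lv_size,lv_attr pve/data_meta0
# then remove it to free about 3.45 GiB in the VG
lvremove pve/data_meta0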
 
@mira
Thank you for your help. I found a temporary solution. I added a new disk to the server and extended the volume group. After that I was able to repair the pool. Now I have to find out how to clean everything up and remove the second disk (a possible approach is sketched after the output below).

Code:
root@nuc:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1  27   0 wz--n- <465.26g 2.18g
root@nuc:~# vgextend pve /dev/nvme0n1p1
  Volume group "pve" successfully extended
root@nuc:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   2  27   0 wz--n- <489.67g 26.59g
root@nuc:~# lvchange -an /dev/pve/data
root@nuc:~# lvconvert --repair /dev/pve/data
  Transaction id 878 from pool "pve/data" does not match repaired transaction id 877 from /dev/mapper/pve-lvol0_pmspare.
  WARNING: LV pve/data_meta5 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
  WARNING: New metadata LV pve/data_tmeta might use different PVs.  Move it with pvmove if required.
root@nuc:~# lvchange -ay /dev/pve/data
root@nuc:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   2  26   0 wz--n- <489.67g <19.69g
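
For the cleanup, the usual LVM approach would be to first free enough space on the original disk (e.g. by removing the old data_metaN backup LVs, including the new pve/data_meta5, once everything is verified), then move any extents off the temporary PV and shrink the VG again. A sketch, assuming /dev/nvme0n1p1 is the temporary disk (not tested on this setup):
Code:
# move all allocated extents off the temporary PV (needs enough free space on the remaining PV)
pvmove /dev/nvme0n1p1
# remove the now-empty PV from the volume group
vgreduce pve /dev/nvme0n1p1
# optionally wipe the LVM label from the partition
pvremove /dev/nvme0n1p1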
 
