Metadata Size

JoeHanson

New Member
May 7, 2020
Hello!
My problem is that I wanted to make a backup of my VM and got this error message: Backup of VM 100 failed - lvcreate snapshot 'raid1/snap_vm-100-disk-0_vzdump' error: Cannot create new thin volume, free space in thin pool raid1/vmstorage reached threshold

So I think the reason for this is that my metadata size is only 100 MiB on a 2.73 TiB HDD RAID1.
If I try to resize the metadata with the command lvresize --poolmetadatasize +1G /dev/raid1/vmstorage, I get the error message: Insufficient free space: 256 extents needed, but only 0 available

I think that's a big problem, isn't it?
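If it helps, this is roughly how the free extents and the thin pool usage can be checked (just a sketch with standard LVM commands; raid1/vmstorage is the pool from the error message):
Code:
# Free physical extents in the volume group (0 free means the metadata LV cannot grow)
vgs -o vg_name,vg_size,vg_free raid1

# Data and metadata usage of the thin pool
lvs -o lv_name,lv_size,data_percent,metadata_percent,lv_metadata_size raid1/vmstorage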

I hope someone has a solution for this. Thanks in advance.
JoeHanson
 
Hi!

Have you been able to solve this? What is the output of lvdisplay?
 
Thanks for the reply. No, I haven't found a solution yet; here's the output of lvdisplay:
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID hPMfLT-Hdub-w9nU-YhIa-7SHJ-Fn1y-piluSg
LV Write Access read/write
LV Creation host, time proxmox, 2020-04-13 13:15:37 +0200
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID Gvbrgc-PG0z-hK3h-OeD1-dXCn-PRsx-sPPbZo
LV Write Access read/write
LV Creation host, time proxmox, 2020-04-13 13:15:37 +0200
LV Status available
# open 1
LV Size 27.75 GiB
Current LE 7104
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

--- Logical volume ---
LV Name data
VG Name pve
LV UUID zHDvpN-eHx4-pEDz-5vos-Zj1l-THEz-KN0QzV
LV Write Access read/write
LV Creation host, time proxmox, 2020-04-13 13:15:38 +0200
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 1
LV Size 59.66 GiB
Allocated pool data 0.00%
Allocated metadata 1.59%
Current LE 15274
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

--- Logical volume ---
LV Name vmstorage
VG Name raid1
LV UUID Zy45iT-mPgH-5Jra-rMJx-VO3B-8yzD-nKRc6C
LV Write Access read/write
LV Creation host, time proxmox, 2020-04-14 18:22:29 +0200
LV Pool metadata vmstorage_tmeta
LV Pool data vmstorage_tdata
LV Status available
# open 2
LV Size <2.73 TiB
Allocated pool data 4.99%
Allocated metadata 89.19%
Current LE 715314
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8

--- Logical volume ---
LV Path /dev/raid1/vm-100-disk-0
LV Name vm-100-disk-0
VG Name raid1
LV UUID 3EysJq-vQdd-2K9m-QhQy-ODQ0-ljr3-m7HVEq
LV Write Access read/write
LV Creation host, time proxmox, 2020-04-20 18:12:53 +0200
LV Pool name vmstorage
LV Status available
# open 1
LV Size 1.00 TiB
Mapped size 13.62%
Current LE 262144
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:10
 
If you do
Code:
vgdisplay raid1
what does it say about PE?
 
--- Volume group ---
VG Name raid1
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 35
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size <2.73 TiB
PE Size 4.00 MiB
Total PE 715364
Alloc PE / Size 715364 / <2.73 TiB
Free PE / Size 0 / 0
VG UUID pMxGfR-eajR-OrtF-3Tem-JkH6-sx1F-rdYCzI
 
I think so too. As I see it, the only solution is to delete the vmstorage pool and create a new one with a bigger metadata size, or is it possible to shrink the pool?
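Recreating it would roughly look like this, I guess (only a sketch, the sizes are just examples; I would make the pool a bit smaller than the VG so that some extents stay free):
Code:
# WARNING: this destroys the pool and every thin volume in it - only after a full backup!
lvremove raid1/vmstorage

# Recreate the pool a bit smaller than the 2.73 TiB VG, with a larger metadata LV,
# so the volume group keeps some free extents (sizes are only examples)
lvcreate -L 2.6T --thinpool vmstorage raid1 --poolmetadatasize 1G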

Here's a small overview:
Code:
root@proxmox:~# vgs
  VG    #PV #LV #SN Attr   VSize    VFree 
  pve     1   3   0 wz--n- <111.29g 13.87g
  raid1   1   2   0 wz--n-   <2.73t     0 
root@proxmox:~# pvs
  PV         VG    Fmt  Attr PSize    PFree 
  /dev/md0   raid1 lvm2 a--    <2.73t     0 
  /dev/sda3  pve   lvm2 a--  <111.29g 13.87g
root@proxmox:~# lvs
  LV            VG    Attr       LSize  Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve   twi-aotz-- 59.66g                  0.00   1.59                            
  root          pve   -wi-ao---- 27.75g                                                         
  swap          pve   -wi-ao----  8.00g                                                         
  vm-100-disk-0 raid1 Vwi-a-tz--  1.00t vmstorage        13.62                                  
  vmstorage     raid1 twi-aotz-- <2.73t                  4.99   89.20
 
So first of all, now would certainly be a good time for a backup. Then take a look at commands like pvresize; it might be possible to get free extents into the volume group without deleting the pool.
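Something along these lines might work, but treat it only as a sketch; it assumes you either have a spare disk or partition to add to the VG, or can grow the underlying /dev/md0 first (/dev/sdX1 is just a placeholder):
Code:
# Option A: add another physical volume to the VG to get free extents
pvcreate /dev/sdX1
vgextend raid1 /dev/sdX1

# Option B: if /dev/md0 itself has been enlarged, let LVM pick up the new space
pvresize /dev/md0

# With free extents available, growing the pool metadata should then work
lvresize --poolmetadatasize +1G raid1/vmstorage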
 
Because of the error message when I use the Proxmox backup function, is the simplest way to just copy the disk with cp /dev/raid1/vm-100-disk-0 /media/hdd/ (an external drive is mounted at /media/hdd)? I know I would have to copy the whole 1 TiB disk, but I think it's the only option.
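For example, something like this is what I have in mind (just a sketch; the guest would be shut down first so the copy is consistent, and the output file name is only an example):
Code:
# Copy the whole thin volume to a raw image file on the external drive
dd if=/dev/raid1/vm-100-disk-0 of=/media/hdd/vm-100-disk-0.raw bs=4M status=progress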
 
Have you tried another backup mode already? You can also add your drive as storage in Proxmox VE and then try to perform the backup from the GUI.
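From the shell that would be something like this (only a sketch; /media/hdd stands for your mounted external drive, and stop or suspend mode should not need the LVM snapshot that is failing):
Code:
# Back up guest 100 without snapshot mode, writing directly to the external drive
vzdump 100 --mode stop --dumpdir /media/hdd --compress lzo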
 
