PVE Error deleting VM

dpsw12

Hello all, please help: I can't create a VM on my Proxmox host.

[screenshot of the error: image_2022-06-15_12-17-53.png]
lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotzM-   3.17t             57.50  100.00
  root          pve -wi-ao----  96.00g
  swap          pve -wi-ao----   8.00g
  vm-100-disk-1 pve Vwi-aotz-- 850.00g data        99.96
  vm-101-disk-1 pve Vwi-aotz-- 900.00g data        16.89
  vm-102-disk-1 pve Vwi-aotz-- 300.00g data        99.99
  vm-103-disk-1 pve Vwi-aotz-- 300.00g data        74.23
  vm-104-disk-1 pve Vwi-aotz-- 150.00g data        100.00
  vm-105-disk-1 pve Vwi-a-tz-- 150.00g data        39.64
 
Please post console output between CODE tags so formatting is preserved and tables stay readable.

To me it looks like you can't create a VM because your LVM-Thin pool is full. Check how full your thin pool is: go to your storage "local-lvm" and check the "Usage" on the "Summary" page.
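For reference, a quick way to check the same thing on the CLI (pool name taken from the default PVE setup):
Code:
# Data% and Meta% of the thin pool
lvs pve/data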
 
No, it's not full.
[screenshot: 1655283350682.png]
I think it's the metadata, but I could be wrong.
Code:
root@devel:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotzM-   3.17t             57.50  100.00
  root          pve -wi-ao----  96.00g
  swap          pve -wi-ao----   8.00g
  vm-100-disk-1 pve Vwi-aotz-- 850.00g data        99.96
  vm-101-disk-1 pve Vwi-aotz-- 900.00g data        16.89
  vm-102-disk-1 pve Vwi-aotz-- 300.00g data        99.99
  vm-103-disk-1 pve Vwi-aotz-- 300.00g data        74.23
  vm-104-disk-1 pve Vwi-aotz-- 150.00g data        100.00
  vm-105-disk-1 pve Vwi-a-tz-- 150.00g data        39.64
 

Jup, if you run lsblk you will see that your LVM thin pool consists of two LVs, one for data and one for metadata. Looks like your metadata LV is full. If your VG still has free space you could use lvextend to double the size of that metadata LV; a sketch follows below. But don't forget to create backups first.
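A minimal sketch of how that could look, assuming the pool is pve/data as in the lvs output above (the +1G is just an illustrative amount):
Code:
# show the hidden data/metadata sub-LVs of the pool
lvs -a pve
# check how many free extents are left in the VG
vgs pve
# if there is free space, grow the pool's metadata LV
lvextend --poolmetadatasize +1G pve/data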
 
Yes, that's correct, but unfortunately my VG is full. How can I resolve this? Should I reduce something, or do you have any recommendation?
My vgdisplay output:
Code:
vgdisplay pve
  --- Volume group ---
  VG Name               pve
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  93
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                9
  Open LV               7
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.27 TiB
  PE Size               4.00 MiB
  Total PE              858392
  Alloc PE / Size       858392 / 3.27 TiB
  Free  PE / Size       0 / 0
  VG UUID               J2MeT1-JLlt-ok20-AmN2-jxm7-E4U2-1vZUPd
 
I thought you might be able to reduce the data LV of that thin pool, but it looks like that isn't supported (at least as of 2018):
https://forum.proxmox.com/threads/reduce-local-lvm-pve-data-and-extend-root.48499/post-227258
From that thread: "Unfortunately LVM thin pools don't support reducing yet."

Do you have some unallocated space on that disk? Usually the PVE installer will leave some space unallocated so it can, for example, be used by LVM snapshots.
You could check that with fdisk -l.
If there is unallocated space you could extend your partition and VG to be able to extend that LV; a sketch follows below.
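A hypothetical example of those steps, assuming the LVM PV is partition 3 on /dev/sda (check your own layout first):
Code:
# look for a gap after the last partition
fdisk -l /dev/sda
# if there is one, grow the LVM partition into it
parted /dev/sda resizepart 3 100%
# let LVM pick up the larger PV; this also grows the free space in the VG
pvresize /dev/sda3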

But maybe you don't even need to increase that metadata LV. Possibly something is going wrong there so that too much metadata is used. Maybe someone from the staff knows why you got so much metadata. I would guess the data-to-metadata ratio is fairly predictable, so usually the metadata shouldn't fill up before the data does.
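As a rough sanity check of that ratio, the thin_metadata_size tool from the thin-provisioning-tools package can estimate how much metadata a pool of a given size should need. The chunk size and thin-volume count below are example values, not taken from this system:
Code:
# chunk size of the pool; smaller chunks mean more metadata
lvs -o+chunk_size pve/data
# estimate metadata size for a ~3.2 TiB pool, 64k chunks, up to 10 thin LVs
thin_metadata_size -b 64k -s 3250g -m 10 -u g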
 
It's very unfortunate that we can't reduce LVM thin pools.
Code:
Disk /dev/sda: 3.3 TiB, 3600629194752 bytes, 7032478896 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 786432 bytes
Disklabel type: gpt
Disk identifier: CD3C2323-6CD3-4D61-ADA3-78BAB0BF6C34

Device      Start        End    Sectors  Size Type
/dev/sda1    2048       4095       2048    1M BIOS boot
/dev/sda2    4096     528383     524288  256M EFI System
/dev/sda3  528384 7032478862 7031950479  3.3T Linux LVM


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 786432 bytes


Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 786432 bytes




Disk /dev/mapper/pve-vm--100--disk--1: 850 GiB, 912680550400 bytes, 1782579200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
Disklabel type: dos
Disk identifier: 0x99e1057d

Device                                 Boot   Start        End    Sectors   Size Id Type
/dev/mapper/pve-vm--100--disk--1-part1 *       2048     999423     997376   487M 83 Linux
/dev/mapper/pve-vm--100--disk--1-part2      1001470 1782579199 1781577730 849.5G  5 Extended
/dev/mapper/pve-vm--100--disk--1-part5      1001472 1782579199 1781577728 849.5G 8e Linux LVM

Partition 2 does not start on physical sector boundary.


Disk /dev/mapper/pve-vm--101--disk--1: 900 GiB, 966367641600 bytes, 1887436800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
Disklabel type: dos
Disk identifier: 0x99e1057d

Device                                 Boot   Start        End    Sectors   Size Id Type
/dev/mapper/pve-vm--101--disk--1-part1 *       2048     999423     997376   487M 83 Linux
/dev/mapper/pve-vm--101--disk--1-part2      1001470 1887436799 1886435330 899.5G  5 Extended
/dev/mapper/pve-vm--101--disk--1-part5      1001472 1887436799 1886435328 899.5G 8e Linux LVM

Partition 2 does not start on physical sector boundary.


Disk /dev/mapper/pve-vm--102--disk--1: 300 GiB, 322122547200 bytes, 629145600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
Disklabel type: dos
Disk identifier: 0x99e1057d

Device                                 Boot   Start       End   Sectors   Size Id Type
/dev/mapper/pve-vm--102--disk--1-part1 *       2048    999423    997376   487M 83 Linux
/dev/mapper/pve-vm--102--disk--1-part2      1001470 629145599 628144130 299.5G  5 Extended
/dev/mapper/pve-vm--102--disk--1-part5      1001472 629145599 628144128 299.5G 8e Linux LVM

Partition 2 does not start on physical sector boundary.


Disk /dev/mapper/pve-vm--103--disk--1: 300 GiB, 322122547200 bytes, 629145600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
Disklabel type: dos
Disk identifier: 0x99e1057d

Device                                 Boot   Start       End   Sectors   Size Id Type
/dev/mapper/pve-vm--103--disk--1-part1 *       2048    999423    997376   487M 83 Linux
/dev/mapper/pve-vm--103--disk--1-part2       999424 629145599 628146176 299.5G  5 Extended
/dev/mapper/pve-vm--103--disk--1-part5      1001472 629145599 628144128 299.5G 8e Linux LVM


Disk /dev/mapper/pve-vm--104--disk--1: 150 GiB, 161061273600 bytes, 314572800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
Disklabel type: dos
Disk identifier: 0x99e1057d

Device                                 Boot   Start       End   Sectors   Size Id Type
/dev/mapper/pve-vm--104--disk--1-part1 *       2048    999423    997376   487M 83 Linux
/dev/mapper/pve-vm--104--disk--1-part2      1001470 314570751 313569282 149.5G  5 Extended
/dev/mapper/pve-vm--104--disk--1-part5      1001472 314570751 313569280 149.5G 8e Linux LVM

Partition 2 does not start on physical sector boundary.


Disk /dev/mapper/pve-vm--105--disk--1: 150 GiB, 161061273600 bytes, 314572800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 524288 bytes
Disklabel type: dos
Disk identifier: 0x00038d32

Device                                 Boot   Start       End   Sectors  Size Id Type
/dev/mapper/pve-vm--105--disk--1-part1 *       2048   2099199   2097152    1G 83 Linux
/dev/mapper/pve-vm--105--disk--1-part2      2099200 314572799 312473600  149G 8e Linux LVM
That's my fdisk -l output; I don't know where to find the unallocated space.

Yes, but what generates that metadata? No one knows why this metadata got full already.
 
Looks like all space is already allocated.

The last option would be to back up all guests, destroy that thin pool, manually create a new one through the CLI with a metadata LV of double the size, and restore the VMs; a sketch follows below.
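A rough sketch of those steps with illustrative values: the old pool's metadata size isn't shown above, so 16G is only an example, and <backup-storage> / <backup-archive> are placeholders:
Code:
# 1) back up every guest to some other storage first
vzdump 100 101 102 103 104 105 --storage <backup-storage> --mode stop
# 2) destroy the pool -- this removes all thin volumes on it!
lvremove pve/data
# 3) recreate it with a bigger metadata LV
lvcreate --type thin-pool -l 95%FREE --poolmetadatasize 16G -n data pve
# 4) restore each guest from its backup, e.g.
qmrestore <backup-archive> 100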
 
I think a backup from the GUI is not possible because of that error. Should I just copy the disk?
 
