PVE Can't Delete Disk Error: Failed to update pool pve/data

dpsw12

Hello guys, I have an error in my Proxmox and got this error:
[Attachment: PVE ERROR DISK.PNG]
[Attachment: PVE METADATA FULL.PNG]
This is my disk setup:
Code:
# pvs
  PV         VG  Fmt  Attr PSize PFree
  /dev/sda3  pve lvm2 a--  1.09t    0
# vgs
  VG  #PV #LV #SN Attr   VSize VFree
  pve   1  11   0 wz--n- 1.09t    0
# lvs
  LV            VG  Attr       LSize     Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotzM- <1013.29g             82.47  100.00
  root          pve -wi-ao----    96.00g
  swap          pve -wi-ao----     8.00g
  vm-100-disk-1 pve Vwi---tz--   200.00g data
  vm-101-disk-1 pve Vwi---tz--   200.00g data
  vm-102-disk-0 pve Vwi-aotz--     1.00g data        99.66
  vm-102-disk-1 pve Vwi-aotz--   150.00g data        96.32
  vm-103-disk-0 pve Vwi-aotz--   100.00g data        55.18
  vm-104-disk-0 pve Vwi-aotz--   100.00g data        73.26
  vm-105-disk-0 pve Vwi-a-tz--   150.00g data        73.21
  vm-106-disk-0 pve Vwi---tz--   250.00g data
# lsblk
NAME                         MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
sda                            8:0    0    1.1T  0 disk
├─sda1                         8:1    0      1M  0 part
├─sda2                         8:2    0    256M  0 part /boot/efi
└─sda3                         8:3    0    1.1T  0 part
  ├─pve-swap                 253:0    0      8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0     96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0    112M  0 lvm
  │ └─pve-data-tpool         253:4    0 1013.3G  0 lvm
  │   ├─pve-data             253:5    0 1013.3G  0 lvm
  │   ├─pve-vm--102--disk--1 253:7    0    150G  0 lvm
  │   ├─pve-vm--105--disk--0 253:9    0    150G  0 lvm
  │   ├─pve-vm--102--disk--0 253:10   0      1G  0 lvm
  │   ├─pve-vm--103--disk--0 253:11   0    100G  0 lvm
  │   └─pve-vm--104--disk--0 253:12   0    100G  0 lvm
  └─pve-data_tdata           253:3    0 1013.3G  0 lvm
    └─pve-data-tpool         253:4    0 1013.3G  0 lvm
      ├─pve-data             253:5    0 1013.3G  0 lvm
      ├─pve-vm--102--disk--1 253:7    0    150G  0 lvm
      ├─pve-vm--105--disk--0 253:9    0    150G  0 lvm
      ├─pve-vm--102--disk--0 253:10   0      1G  0 lvm
      ├─pve-vm--103--disk--0 253:11   0    100G  0 lvm
      └─pve-vm--104--disk--0 253:12   0    100G  0 lvm
sr0                           11:0    1   1024M  0 rom
# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.106-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-5
pve-kernel-helper: 6.4-5
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.106-1-pve: 5.4.106-1
pve-kernel-5.4.103-1-pve: 5.4.103-1
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.4.35-1-pve: 4.4.35-77
ceph-fuse: 12.2.13-pve1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.5-pve1~bpo10+1

Please help,
Thank you
 
Hi,
the pool's metadata is full. 117 MB is very little; the current installer uses 1% of the pool size, with a minimum of 1G and a maximum of 16G. Try extending the space for the metadata with lvextend --poolmetadatasize +<size><unit> pve/data. Did you ever recreate or change the pool manually? On which version was the system originally installed?
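For reference, a minimal check-and-extend sequence could look like this (the +1G is only an example; the extension has to fit into the volume group's free space):
Code:
lvs -a pve                                # Data%/Meta% usage; data_tmeta is the metadata LV
vgs pve                                   # VFree is what lvextend can draw from
lvextend --poolmetadatasize +1G pve/data  # grow the pool metadata by 1G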
 
When I try to extend, I get this error:
Code:
# lvextend --poolmetadatasize +16G pve/data
  Insufficient free space: 4096 extents needed, but only 0 available
I never recreated or changed the pool manually. My PVE was originally PVE 5.
 
Right, because the volume group has no free space available. Sadly, it's not possible to lvreduce a thin pool yet. You can do one of the following:
  • swapoff /dev/pve/swap, then lvreduce -L -1G pve/swap, mkswap /dev/pve/swap, and swapon /dev/pve/swap (full sequence sketched below)
  • add a new disk to the volume group
  • recreate the pool (make sure you have working backups of the VMs!)
  • risky!: shrink the root file system and then lvreduce the root LV
Afterwards it should be possible to extend the metadata.
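For the swap option, the full sequence would look roughly like this (assuming the 1G freed from pve/swap is enough; adjust the sizes if not):
Code:
swapoff /dev/pve/swap                      # stop using the swap LV
lvreduce -L -1G pve/swap                   # shrink it by 1G (confirm the prompt); this frees space in the VG
mkswap /dev/pve/swap                       # recreate the swap signature at the new size
swapon /dev/pve/swap                       # re-enable swap
lvextend --poolmetadatasize +1G pve/data   # the metadata extension now has room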

I never recreated or changed the pool manually. My PVE was originally PVE 5.
The default I mentioned in the installer seems to be from May 2018, so probably there wasn't a good enough default for older installations.
 
I'll try the swap option first, I think it's the best one.
Thank you, I'll be back with the result.

It's working after reducing the swap space by 1 GB and extending the metadata again, and now my metadata is not full.
Right now I'm trying to fix the VM HDD errors using GParted, I hope it's permanent.
I still can't delete the VM disk: failed to update pool pve/data.
Thanks again.
 
Hello sir, can you help? I got this error when creating a new VM:
Code:
Thin pool pve-data-tpool (253:4) transaction_id is 49, while expected 50.

TASK ERROR: unable to create VM 100 - lvcreate 'pve/vm-100-disk-0' error:   Failed to suspend pve/data with queued messages.
 
Make sure you have working backups of everything and try:
Code:
lvchange -an pve/data
lvconvert --repair pve/data
lvchange -ay pve/data
If it still complains about the pool being active, try lvchange -an pve instead (or use a live CD). Also, you might need to free up additional space in the volume group for the repair command to work properly.
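Regarding the free space: as a rough pre-check, something like this shows what is available and how big the metadata LV is:
Code:
vgs pve      # VFree in the volume group
lvs -a pve   # size of data_tmeta (and lvol0_pmspare)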

If the above doesn't work either, you can still try to follow the final section here ("Fixing Transaction ID Mismatch").
 
Hello sir, this is the result of the commands above:
Code:
root@pve:~# lvchange -an pve/data
root@pve:~# lvconvert --repair pve/data
  Active pools cannot be repaired.  Use lvchange -an first.
root@pve:~# lvh
-bash: lvh: command not found
root@pve:~# lvchange -an pve
  Logical volume pve/swap in use.
  Logical volume pve/root contains a filesystem in use.
  Logical volume pve/vm-102-disk-1 in use.
  Logical volume pve/vm-102-disk-0 in use.
  Logical volume pve/vm-103-disk-0 in use.
  Logical volume pve/vm-104-disk-0 in use.
  Device pve-data_tdata (253:3) is used by another device.
  Device pve-data_tmeta (253:2) is used by another device.
root@pve:~# lvconvert --repair pve/data
  Active pools cannot be repaired.  Use lvchange -an first.
root@pve:~# lvconvert --repair pve
  Cannot find VG name for LV pve.
root@pve:~# lvchange -ay pve
root@pve:~#
I can't use a live CD because the server is in a data center.
After following the link you gave me and referring to the section "Fixing Transaction ID Mismatch", when I edited the backup file I didn't find a transaction_id matching my problem. Here's the result:
Code:
Error that i got :
Thin pool pve-data-tpool (253:4) transaction_id is 49, while expected 50.
TASK ERROR: unable to create VM 100 - lvcreate 'pve/vm-100-disk-0' error:   Failed to suspend pve/data with queued messages.

Backup File :
# Generated by LVM2 version 2.03.02(2) (2018-12-18): Wed Sep  1 16:03:08 2021

contents = "Text Format Volume Group"
version = 1

description = "vgcfgbackup pve -f /home/backup"

creation_host = "pve"    # Linux pve 5.4.106-1-pve #1 SMP PVE 5.4.106-1 (Fri, 19 Mar 2021 11:08:47 +0100) x86_64
creation_time = 1630486988    # Wed Sep  1 16:03:08 2021

pve {
    id = "xjtxa2-JhNI-NjT2-RerQ-32bM-743Q-4O3K1f"
    seqno = 114
    format = "lvm2"            # informational
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 8192        # 4 Megabytes
    max_lv = 0
    max_pv = 0
    metadata_copies = 0

    physical_volumes {

        pv0 {
            id = "0jPQ4w-WVMQ-iTn1-h9Kj-VlsU-5plf-IPxzE0"
            device = "/dev/sda3"    # Hint only

            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 2343583631    # 1.09132 Terabytes
            pe_start = 2048
            pe_count = 286081    # 1.09131 Terabytes
        }
    }

    logical_volumes {

        swap {
            id = "jwrNsH-HUPG-lIMc-ny15-1fUY-xlhs-tmWjpx"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1524737761    # 2018-04-26 17:16:01 +0700
            creation_host = "proxmox"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 1536    # 6 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 0
                ]
            }
        }

        root {
            id = "s3SIBx-yOpX-gBSf-AAbb-c9EC-pm6w-OuyqeD"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1524737761    # 2018-04-26 17:16:01 +0700
            creation_host = "proxmox"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 24576    # 96 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 2048
                ]
            }
        }

        data {
            id = "BvSwXF-x8Sy-wOB7-uPGP-JFHZ-UKvZ-ofe7aG"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1524737762    # 2018-04-26 17:16:02 +0700
            creation_host = "proxmox"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 259401    # 1013.29 Gigabytes

                type = "thin-pool"
                metadata = "data_tmeta"
                pool = "data_tdata"
                transaction_id = 51
                chunk_size = 512    # 256 Kilobytes
                discards = "passdown"
                zero_new_blocks = 1

                message1 {
                    delete = 4
                }
            }
        }

        vm-101-disk-1 {
            id = "yClBlm-3nFn-tJ9g-ime3-zTg4-OQYC-35BK10"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1524830883    # 2018-04-27 19:08:03 +0700
            creation_host = "pve"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 51200    # 200 Gigabytes

                type = "thin"
                thin_pool = "data"
                transaction_id = 13
                device_id = 2
            }
        }

        vm-102-disk-1 {
            id = "mHoFpN-wVyS-MbCU-Dg8a-T1Ah-33IQ-kNcVtn"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1532058990    # 2018-07-20 10:56:30 +0700
            creation_host = "pve"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 38400    # 150 Gigabytes

                type = "thin"
                thin_pool = "data"
                transaction_id = 14
                device_id = 3
            }
        }

        vm-105-disk-0 {
            id = "Yr8t9O-KBcW-Q2kI-gpdv-2DgH-mrKm-AZqKKP"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1584537301    # 2020-03-18 20:15:01 +0700
            creation_host = "pve"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 38400    # 150 Gigabytes

                type = "thin"
                thin_pool = "data"
                transaction_id = 33
                device_id = 7
            }
        }

        vm-102-disk-0 {
            id = "u1mrEq-3eSC-YNH7-blZq-ry8K-xWFR-T6Ilhd"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1585724746    # 2020-04-01 14:05:46 +0700
            creation_host = "pve"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 256    # 1024 Megabytes

                type = "thin"
                thin_pool = "data"
                transaction_id = 34
                device_id = 8
            }
        }

        vm-103-disk-0 {
            id = "nwNHIA-rnci-XcvY-xvaq-gD2S-YWcj-cZKjTX"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1618372649    # 2021-04-14 10:57:29 +0700
            creation_host = "pve"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 25600    # 100 Gigabytes

                type = "thin"
                thin_pool = "data"
                transaction_id = 39
                device_id = 9
            }
        }

        vm-104-disk-0 {
            id = "f6abVJ-PZD4-d4sj-ohLM-vNU6-lDIU-i4qMXZ"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1618376953    # 2021-04-14 12:09:13 +0700
            creation_host = "pve"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 25600    # 100 Gigabytes

                type = "thin"
                thin_pool = "data"
                transaction_id = 42
                device_id = 10
            }
        }

        vm-106-disk-0 {
            id = "nAbX31-cWLP-b1Oe-qVlb-gtt8-fle4-Ryyh1B"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_time = 1627019938    # 2021-07-23 12:58:58 +0700
            creation_host = "pve"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 64000    # 250 Gigabytes

                type = "thin"
                thin_pool = "data"
                transaction_id = 45
                device_id = 11
            }
        }

        data_tdata {
            id = "rdUPfk-uL8X-VFJI-bfRt-AmBO-NMx9-iLc0qS"
            status = ["READ", "WRITE"]
            flags = []
            creation_time = 1524737761    # 2018-04-26 17:16:01 +0700
            creation_host = "proxmox"
            segment_count = 2

            segment1 {
                start_extent = 0
                extent_count = 112289    # 438.629 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 26624
                ]
            }
            segment2 {
                start_extent = 112289
                extent_count = 147112    # 574.656 Gigabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 138969
                ]
            }
        }

        data_tmeta {
            id = "SEDxHz-FKkv-SbF2-HEmM-8LaE-0gfQ-jqe27N"
            status = ["READ", "WRITE"]
            flags = []
            creation_time = 1524737762    # 2018-04-26 17:16:02 +0700
            creation_host = "proxmox"
            segment_count = 2

            segment1 {
                start_extent = 0
                extent_count = 28    # 112 Megabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 138913
                ]
            }
            segment2 {
                start_extent = 28
                extent_count = 256    # 1024 Megabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 1792
                ]
            }
        }

        lvol0_pmspare {
            id = "57PR8i-8nR6-xu2H-wPI0-4IFs-rqoq-s97eAF"
            status = ["READ", "WRITE"]
            flags = []
            creation_time = 1524737762    # 2018-04-26 17:16:02 +0700
            creation_host = "proxmox"
            segment_count = 1

            segment1 {
                start_extent = 0
                extent_count = 28    # 112 Megabytes

                type = "striped"
                stripe_count = 1    # linear

                stripes = [
                    "pv0", 138941
                ]
            }
        }
    }

}
 
Regarding the "Logical volume ... in use" messages from lvchange -an pve: you need to first shut down all of the VMs still using those disks.
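For example (the VM IDs are the ones from the "in use" messages; qm shutdown does a clean guest shutdown):
Code:
qm list             # see which of those VMs are still running
qm shutdown 102     # repeat until no thin LVs are left open
qm shutdown 103
qm shutdown 104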

As for not being able to use a live CD because the server is in a data center: I'd suggest waiting for the next maintenance window then.

As for whether to follow the "Fixing Transaction ID Mismatch" section from that link: yes, but I wouldn't try it while the system is in production.
 
If using a live CD, should I mount the disk and do the repair? And which one should I repair, pve or pve/data?

Can you check my edited post? I tried fixing the transaction_id but didn't find a matching transaction_id.
 
No need to mount anything, but the LVs should show up. It should be enough to repair pve/data or did you have any issues with pve/root too?
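So from the live environment, roughly (a sketch; nothing from the pve VG is mounted there, so deactivating should work):
Code:
lvchange -an pve              # deactivate the whole VG
lvconvert --repair pve/data   # rebuild the thin pool metadata
lvchange -ay pve              # reactivate, then reboot into Proxmox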

About your edited post: was the thin pool still being used in between the lvcreate and the vgcfgbackup commands? The vgcfgbackup+editing+vgcfgrestore should also only be done after lvchange -an pve. And since the root file system lives in the same volume group, this has to be done from a live CD.
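For reference, the general shape of that round-trip from the live CD (the file name is arbitrary; which transaction_id value to put in the file is exactly what the linked section describes, so follow it rather than this sketch for the details):
Code:
lvchange -an pve                          # everything in the VG must be inactive
vgcfgbackup pve -f /tmp/pve.cfg           # dump the LVM metadata to a text file
# edit /tmp/pve.cfg: in the "data" thin-pool segment, set transaction_id to the
# value the kernel reports as the pool's actual one (49 in the error above)
vgcfgrestore pve -f /tmp/pve.cfg --force  # --force is needed for VGs containing thin volumes
lvchange -ay pve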
 
I haven't found anything strange with pve/root so far, so I guess I just have to fix pve/data.


Yes, it's still being used. Yes, my only option is using a live CD. I hope it goes well, thanks for your help. I'll try it when the next visit is scheduled.
 
Hello sir, I tried those commands from the live CD: lvchange -an pve first, then the vgcfgbackup+editing+vgcfgrestore, and then I ran the repair again. After rebooting, Proxmox is not booting. What did I miss?
 
