[SOLVED] remove lvm-thinpool

150d
Member
Mar 23, 2022
Hi,

I can't get rid of an "LVM-Thinpool" on a Proxmox node. The node has two SSDs, one NVMe and one SATA:

/dev/nvme0n1 contains three partitions, "BIOS boot", "EFI" and a large "LVM" where basically everything is stored - VMs, images, backups.

/dev/sda shows no partitions under "Disks". The disk is supposed to be empty.


Under "Disks/LVM" I can see pve->/dev/nvme0n1p3. That's fine.

Under "Disks/LVM-Thin" I can see one entry:

name: data
volume group: pve

How can I remove this?

If I choose "More/Destroy", the command fails: "command lvremove -y pve/data failed: exit code 5"

If I try the same in a shell I get this message:

"Removing pool "data" will remove 4 dependent volume(s). Proceed?"


These "4 dependent volumes" are apparently VM disks that are supposed to be on the LVM on /dev/nvme0n1p3, not in this Thinpool. They are in use, and that's probably why the command to delete them failed.
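Before removing a thin pool, you can check (read-only) which volumes actually depend on it. A minimal sketch, assuming the volume group is named pve and the pool is named data:

```shell
# List the logical volumes in VG "pve" together with their backing pool.
# Thin volumes that depend on the pool show "data" in the Pool column.
lvs -o lv_name,pool_lv,lv_size pve
```

Any row with "data" in the Pool column is one of the "dependent volumes" that lvremove warns about.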


How can I remove this obscure "data" Thinpool without destroying anything else?


Regards


PS: Sorry, I forgot to describe how the situation was created:

The "LVM" on /dev/nvme0n1p3 was in use for VMs.

For additional storage, my intention was to create an LVM-Thinpool on /dev/sda. (Calling it "data" may have been a mistake - is this name already "taken"?)

What I fail to understand is how this new Thinpool not only ended up on /nvme instead of on /sda, but how it got itself entangled with the LVM already existing on /nvme.
 
Hi,
naming the other thin pool data shouldn't be an issue, because the volume group name should be different. Please share the output of pvs, vgs and lvs.
 
Hi,
naming the other thin pool data shouldn't be an issue, because the volume group name should be different.
Great. I was worried about that one.

Please share the output of pvs, vgs and lvs.
Of course:

root@host:~# pvs
  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <476.44g <16.00g
root@host:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1   7   0 wz--n- <476.44g <16.00g
root@host:~# lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <349.31g             0.81   0.51
  root          pve -wi-ao----   96.00g
  swap          pve -wi-ao----    8.00g
  vm-100-disk-0 pve Vwi-aotz--   40.00g data        7.05
  vm-101-disk-0 pve Vwi-a-tz--    4.00m data        14.06
  vm-101-disk-1 pve Vwi-a-tz--    4.00m data        0.00
  vm-101-disk-2 pve Vwi-a-tz--   64.00g data        0.00

Regards
 
Well, you did (re)create the thin pool in the pve volume group and not on /dev/sda. You should be able to:
  1. Create a thinpool on /dev/sda. Either via UI in [Node] > Disks > LVM Thin > Create: Thinpool. Or via CLI create a volume group that owns /dev/sda with pvcreate and vgcreate and then create a thinpool as described here, replacing the volume group name and size accordingly of course.
  2. Move the volumes to the new storage (in the guest's Hardware view there's Disk Action > Move Storage).
  3. After all volumes are moved, you should be able to remove pve/data.
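The CLI route from step 1 could look roughly like this. This is a sketch, assuming /dev/sda is empty; the volume group name "newvg", the storage ID "sda-thin" and the size are placeholders, not anything prescribed by Proxmox:

```shell
# Turn /dev/sda into an LVM physical volume and give it its own volume group.
pvcreate /dev/sda
vgcreate newvg /dev/sda

# Create a thin pool in that volume group
# (leave some free space in the VG for metadata growth).
lvcreate -L 400G --thinpool data newvg

# Register the pool as a Proxmox storage so it shows up in the UI.
pvesm add lvmthin sda-thin --vgname newvg --thinpool data --content images,rootdir
```

After that, the new storage appears under Datacenter > Storage and can be chosen as the target in Disk Action > Move Storage.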
 
Well, you did (re)create the thin pool in the pve volume group and not on /dev/sda.
Could you explain this a little deeper, please? I really don't understand what happened:

I was using the GUI all along. The [node]/Disks/LVM table showed the NVME, while the [node]/Disks/LVM-Thin was empty.

In [host]/Disks/LVM-Thin I selected "create thinpool" and chose "Disk: /dev/sda" with "Name: data".

Where did it go so wrong?
(Did I maybe select the NVME in "Disk:" while creating the thinpool by mistake, would that explain it?)

You should be able to:
That would move the existing volumes somewhere else (e.g. /dev/sda), then delete "data" on the NVMe, then recreate "data" on the NVMe and move the volumes back. Did I understand this correctly?

Regards
 
In [host]/Disks/LVM-Thin I selected "create thinpool" and chose "Disk: /dev/sda" with "Name: data".

Where did it go so wrong?
(Did I maybe select the NVME in "Disk:" while creating the thinpool by mistake, would that explain it?)
If you select /dev/sda there, it should create the thin pool on that disk of course.
 
I followed your advice: All my VM disks are now on the new thinpool on /dev/sda, and the "data" thinpool is removed.

How can I now reuse the space again?

If I try to add a LVM-Thin and select volume group pve, I can't select anything for "Thinpool" - apparently the group pve doesn't have any thinpool any more. When I try to create a new Thinpool, the message is "No Disks unused".

Adding "LVM" (non-Thin) would probably work, at least the dialog doesn't show an error.

Regards
 
I followed your advice: All my VM disks are now on the new thinpool on /dev/sda, and the "data" thinpool is removed.

How can I now reuse the space again?
The partition still belongs to the volume group pve and it should, because the root LV is on there ;)

If I try to add a LVM-Thin and select volume group pve, I can't select anything for "Thinpool" - apparently the group pve doesn't have any thinpool any more. When I try to create a new Thinpool, the message is "No Disks unused".
Yes, the UI doesn't allow adding a thin pool to an already existing volume group.

Adding "LVM" (non-Thin) would probably work, at least the dialog doesn't show an error.
Hmm, it should have the same filter and also consider the device as used AFAIK. Or do you mean adding the volume group pve as an LVM storage in Datacenter > Storage? Yes, that would be a good way to make the space usable.
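Adding the existing pve volume group as a (non-thin) LVM storage can also be done on the CLI. A sketch; the storage ID "pve-lvm" is an arbitrary example:

```shell
# Expose the existing "pve" volume group as an LVM (non-thin) storage.
# New disks then become regular LVs in the free space of the VG.
pvesm add lvm pve-lvm --vgname pve --content images,rootdir
```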

Another way would be to extend the root LV with lvextend to make the local storage bigger, but then you have less separation.
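For the lvextend option, a minimal sketch, assuming the default pve/root LV and an example size of 100 GiB:

```shell
# Grow the root LV by 100 GiB; -r also resizes the filesystem on it
# in the same step (calls the matching resize tool, e.g. resize2fs).
lvextend -r -L +100G /dev/pve/root
```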

And if you prefer a file-based storage you could also create a new LV in the pve volume group, and a filesystem on it, and mount that.
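The file-based variant might look like this. The LV name "extra", the size, the mount point and the storage ID are all illustrative assumptions:

```shell
# Create a 100 GiB LV in the existing "pve" volume group,
# put a filesystem on it and mount it.
lvcreate -L 100G -n extra pve
mkfs.ext4 /dev/pve/extra
mkdir -p /mnt/extra
mount /dev/pve/extra /mnt/extra

# Register the mount point as a directory storage
# (add an fstab or systemd mount entry so it survives reboots).
pvesm add dir extra --path /mnt/extra --content backup,iso
```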
 
I now proceeded along those lines:

- created a new thinpool on the CLI
- added a "LVM-Thin" via the GUI

-> working fine.
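For reference, recreating a thin pool in the existing pve volume group on the CLI comes down to a single command. A sketch; the size is an example and must fit in the VG's free space:

```shell
# Recreate a thin pool named "data" inside the existing "pve" VG.
lvcreate -L 300G --thinpool data pve
```

The pool can then be added as an LVM-Thin storage under Datacenter > Storage.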

Thanks for your help!

Regards
 
I now proceeded along those lines:

- created a new thinpool on the CLI
- added a "LVM-Thin" via the GUI

-> working fine.

Thanks for your help!

Regards
Great! Please mark the thread as [SOLVED] by editing the first post/thread and selecting the prefix. This helps other users find solutions more quickly.
 
Hi, I created this LVM-Thin disk to gain some experience (I'm a beginner), I put some VMs inside which I then destroyed.
Now I try to delete HD1 (pve) but I can't, I think I've made a big mistake!
Could you help me please?
Is there a cli command to delete HD1 (pve)?
Thank you, kind regards.

[Attachment: LVM-Thin eliminata.png]
 
Hi,
how exactly did you destroy the thin pool? Did you wipe the disk? Or did you use the Destroy button (in the More menu in your screenshot)? In the CLI, you can use pvs, lvs to check if the thin pool still exists. If not, it should be enough to go to Datacenter > Storage and remove the left-over from there.
 
Hi,
how exactly did you destroy the thin pool? Did you wipe the disk? Or did you use the Destroy button (in the More menu in your screenshot)? In the CLI, you can use pvs, lvs to check if the thin pool still exists. If not, it should be enough to go to Datacenter > Storage and remove the left-over from there.
Thanks, it was very simple, I did as you suggested:
Datacenter, Storage, Remove.
Thank you, kind regards
 