[SOLVED] Resizing pve data

gdi2k

Active Member
Aug 13, 2016
During Proxmox VE 4.3 install to a 240 GB SSD drive, default install parameters were used, so we ended up with a pve data volume of around 150 GB:

Code:
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                KFelnS-3YiA-cUzZ-hemx-eK3r-LzwB-eFw2j4
  LV Write Access        read/write
  LV Creation host, time proxmox, 2016-11-16 17:13:29 +0800
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 2
  LV Size                151.76 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.45%
  Current LE             38851
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:4

This is a waste of space as we store our images on Ceph, and we can't use this space for backups (can only store raw images on local-lvm).

I would like to resize this partition so I can create an additional ext4 partition that we can use for backups.

I tried using lvresize, but get:
Code:
root@prox1:/var/lib/vz# lvresize /dev/pve/data -L 50G
  Thin pool volumes cannot be reduced in size yet.
  Run `lvresize --help' for more information.

How can I reclaim this wasted space? pve data is empty and serves no purpose in our install. Can it be deleted without causing carnage?
 
If there is no data on the thin pool, you can simply delete it (and the associated local-lvm storage). Then do whatever you like with the free LVM space.
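Something like this should do it (assuming the storage entry is named "local-lvm" and the thin pool is really empty):

Code:
# remove the storage definition (or delete it in the GUI under Datacenter -> Storage)
pvesm remove local-lvm
# then remove the empty thin pool itself
lvremove pve/data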
 
Hello dietmar and gdi2k. Can I delete the local-lvm (LVM-thin) storage directly in the web GUI? If not, could you please walk me through how to achieve that. Thanks
 
Hi jacmel,

I first deleted the storage entry "local-lvm" from the web GUI (under Storage), then I deleted the underlying volume from the command line with lvremove. You can use tab-complete after lvremove. It should be:

Code:
lvremove /dev/pve/data

I think.
 
Hi gdi2k, I finally used these instructions and they worked perfectly:
https://pve.proxmox.com/wiki/Installation:_Tips_and_Tricks
#Optional:_Reverting_Thin-LVM_to_.22old.22_Behavior_of_.2Fvar.2Flib.2Fvz_.28Proxmox_4.2_and_later.29
After that I erased the storage entry "local-lvm" from the web GUI (under Storage).
Now I have all my space available.
Thanks a lot for your answer.
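For reference, the commands from that wiki section were roughly these (your sizes will differ):

Code:
lvremove pve/data
lvresize -l +100%FREE pve/root
resize2fs /dev/mapper/pve-root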
 
I'm sorry, but it seems the thread starter's original question was: "Is there any method of just reducing the size of the pve/data volume?"

I tried the same thing in PVE 5.0-23/af4267bf and still get the same message.

So, is there?
 
@dietmar @DomKnigi: Did you find a solution? We still have the same problem over here, and the only thing I found did not seem confidence-inspiring: https://www.redhat.com/archives/linux-lvm/2014-March/msg00020.html

Deleting the entire pve/data is not an option, since there are VMs and LXC containers on it that we definitely need. We need to make pve/root bigger, but there is not enough unallocated space left outside pve/data, while there is more than enough free space inside pve/data.

Any help or news would be greatly appreciated...

Greetings from Germany

goldsteal
 
I have a 5-PC cluster and I want to set up Ceph storage on 3 of the PCs, and I've already been fighting with this problem for several days... lvresize won't shrink the existing storage (with one offline VM).
I've found only one piece of advice... erase the data pool and make a smaller one. But I'd like to do it without data loss (OK, I can back up the data to a NAS). It's 2019... what's the reason for this behaviour? I've read it's because of snapshots on the disk (I have none), and it seems nobody knows the real reason. :(
Note: I have data as a thin LVM pool.
 
It seems we have 2 cases... either it works properly (e.g. my friend told me he tried it several times (not in Proxmox) and had no problem) or it doesn't work at all.
We are on the unlucky side.
Note: I tried the same procedure with thick LVM... and it works. I think thin LVM simply cannot be shrunk.
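For the thick LVM case, shrinking an LV worked with something along these lines ("myvg/mylv" and the size are just placeholders; if there is a filesystem on the LV it has to be shrunk first):

Code:
lvresize -L 50G myvg/mylv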
 
I don't see an answer here, but couldn't you just back up the whole LVM-thin partition to a bulk drive, delete it, resize the system LVM, then recreate the LVM-thin pool and restore the data? 240GB isn't that much to back up and restore these days. Unless you're trying to do all of this with the LVM-thin partition online.
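For the backup part I would probably back up each guest with vzdump to a directory or NAS storage rather than copying the pool itself, something like this (the storage name and VMID are only examples):

Code:
vzdump 100 --storage backup-nas --mode stop --compress zstd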
 
I don't see an answer here. Is this still not possible? I'm running PVE 7.1-12 in a development / hobby environment but I don't want to lose my VMs. Is there a way to boot from USB, i.e. with no VGs or LVs mounted and then resize?

I've been using this setup for 6+ months but now the 100G of /dev/pve/root space is too restrictive and I have well over a TB of unused space because the default install ate my whole 2TB nvme drive.

Code:
> vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID           
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  70
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                19
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <1.82 TiB
  PE Size               4.00 MiB
  Total PE              476803
  Alloc PE / Size       472612 / 1.80 TiB
  Free  PE / Size       4191 / 16.37 GiB
  VG UUID               fdRKNk-6Gv1-jLyD-oGvv-wNv6-5rEQ-9FYU5C

Note that the VG Status is resizable. That's not very meaningful if I cannot shrink an LV within it.

An acceptable alternative for me would be to create an LV inside /dev/pve/data and mount it on the host. Is that an easier ask? ...
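Roughly what I have in mind (untested; the LV name, size, and mount point are just examples):

Code:
# carve a thin volume out of the existing pool and mount it on the host
lvcreate --thin -V 500G -n backup pve/data
mkfs.ext4 /dev/pve/backup
mkdir -p /mnt/backup
mount /dev/pve/backup /mnt/backup
# add it as a directory storage (and add an /etc/fstab entry so the mount survives reboots)
pvesm add dir backup --path /mnt/backup --content backup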


UPDATE:
I ended up deleting all the LVs (12 disk images plus data), resizing root, and restoring the VMs from backup. It seems to have worked. Here's my code, if anyone's interested.

Bash:
#delete
lvremove /dev/pve/vm-100-disk-0 -y
...
lvremove /dev/pve/data -y

#resize
lvresize -L +1400G /dev/pve/root
resize2fs /dev/pve/root

#restore
lvcreate -L 358G -n data pve
lvconvert --type thin-pool pve/data
qmrestore vzdump-qemu-100-2022_04_15-20_01_09.vma.zst 100 --force
...
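(If the local-lvm storage entry had also been removed, I believe it could be re-added with something like the following - the storage ID and content types are just my guess:)

Code:
pvesm add lvmthin local-lvm --vgname pve --thinpool data --content rootdir,images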
 
If there is no data on the thin pool, you can simply delete it (and the associated local-lvm storage). Then do whatever you like with the free LVM space.
okay, but I still need a hint
I deleted "local-lvm" by using:
Code:
lvremove pve/data

I also resized the LVM by using:
Code:
lvextend -L+8G pve/root

so it looks as expected:
Code:
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root pve -wi-ao---- <16.34g
  swap pve -wi-ao---- <2.38g

BUT how do I let PVE know about these changes?
The GUI still shows the old size for "local"; "local-lvm" is gone - as expected.

How can I propagate the LVM changes?

Thanks
Ole
 
okay, after a quick search - I needed to extend the fs (of course), so:
Code:
resize2fs /dev/mapper/pve-root

done
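To double-check the new sizes, something like:

Code:
lvs
df -h /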
 
Hi all!
I'd like to dd-clone a PVE installation from a 2TB RAID to a 512GB NVMe.
I don't have any VMs on it, so I just need to preserve its configuration and settings.
All my VMs and LXCs are on the second 1TB SSD drive (lvmthin volume).

What would be the best way to do this?

Would one solution be to delete the "local" PVE storage and then dd-clone /dev/sda to the NVMe?
Or something else?

If I don't shrink sda, which is 2TB (only 100GB used), to less than 512GB, will dd still clone it to the 512GB NVMe drive? Or do I need to shrink it first?

This is how it looks:
https://prnt.sc/Bi1ZmUOYzc46
https://prnt.sc/QA2aVtddf35W

Thanks a lot!
 
