Correct steps to reduce pve/data?


Apr 30, 2023

I have a server with a single disk on which I just installed Proxmox.
I need to reduce the size of pve/data, but I am unsure how to do that.

The reason I need to do that is that I'd like some extra free space for, e.g., pictures that I want to share with my VMs so that I can use PhotoPrism.

From what I can see online, I need to use resize2fs first and then lvresize. It appears I need to unmount the pve/data partition first, but I am unable to find its mountpoint.

root@pve:/dev/mapper# pvs
  PV             VG  Fmt  Attr PSize   PFree
  /dev/nvme0n1p3 pve lvm2 a--  893.25g 16.00g
root@pve:/dev/mapper# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  pve   1   7   0 wz--n- 893.25g 16.00g
root@pve:/dev/mapper# lvs
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  base-103-disk-0 pve Vri---tz-k   8.00g data                                            
  data            pve twi-aotz-- 757.78g             0.52   0.26                        
  root            pve -wi-ao----  96.00g                                                
  swap            pve -wi-ao----   8.00g                                                
  vm-100-disk-0   pve Vwi-a-tz--   8.00g data        9.44                                
  vm-101-disk-0   pve Vwi-a-tz--   8.00g data        10.44                              
  vm-102-disk-0   pve Vwi-a-tz--   5.00g data        15.35                              

I can see the root mount, but not the data mount:

root@pve:/dev/mapper# mount|grep data
root@pve:/dev/mapper# mount|grep root
/dev/mapper/pve-root on / type ext4 (rw,noatime,errors=remount-ro,stripe=64)
/dev/mapper/pve-root on /var/hdd.log type ext4 (rw,noatime,errors=remount-ro,stripe=64)
/dev/mapper/pve-root on /var/lib/hdd.rrdcached type ext4 (rw,noatime,errors=remount-ro,stripe=64)

So what are the steps I need to take to reduce pve/data?
pve/data is a thin pool [1]; it is not directly mounted.

LVM only supports extending thin pools; you cannot shrink them. A workaround could be to create a volume on the thin pool and then mount it into your root file system.
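A minimal sketch of that workaround, assuming the pool is pve/data; the volume name, size, and mountpoint below are just placeholders:

```shell
# Create a 100G thin volume inside the pve/data thin pool
lvcreate -V 100G --thinpool pve/data -n shared

# Put a filesystem on it and mount it on the host
mkfs.ext4 /dev/pve/shared
mkdir -p /mnt/shared
mount /dev/pve/shared /mnt/shared
```

Since the volume is thin-provisioned, it only consumes space from the pool as data is written.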

Otherwise, you would have to back up the contents of the thin pool, delete and recreate it with less space, and then restore from the backups.
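Roughly, the delete-and-recreate route could look like this. This is destructive: every guest disk on the pool must be backed up first (e.g. with vzdump), and the 300G target size is just an example:

```shell
# DESTRUCTIVE: back up all guests on the pool before running this
lvremove pve/data                      # delete the thin pool and its volumes
lvcreate -L 300G -n data pve           # recreate the data LV at the smaller size
lvconvert --type thin-pool pve/data    # convert the plain LV back into a thin pool
```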

Thank you for your reply. I removed the volume and created a new one with a smaller size:

lvcreate -L 300G -n data pve
lvconvert --type thin-pool pve/data

Afterwards I had to re-add the volume to Proxmox:

pvesm add lvmthin local-lvm -thinpool data -vgname pve -content rootdir,images

root@pve:/var/lib/vz# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-a-tz-- 300.00g             0.00   10.43                           
  root pve -wi-ao----  96.00g                                                   
  swap pve -wi-ao----   8.00g

Do I now create another volume with lvcreate, which I can then add to my fstab?

/dev/pve/mynewvol /mount/mynewvol ext4 errors=remount-ro,noatime 0 1

Or how do I proceed from here?
Do I now create another volume with lvcreate, which I can then add to my fstab?

Yes, create a new volume with lvcreate and then create the filesystem you want on it via mkfs.

Then you should be able to add it to the fstab file. For LVs created with LVM, it is preferable to use the device path under /dev/mapper/<lv>. Alternatively, you could create a systemd mount unit [1], but that is a bit more complicated.
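Put together, the steps could look like this; the volume name, size, and mountpoint are placeholders (note that LVM exposes pve/mynewvol as /dev/mapper/pve-mynewvol):

```shell
# Create the LV and put an ext4 filesystem on it
lvcreate -L 200G -n mynewvol pve
mkfs.ext4 /dev/pve/mynewvol
mkdir -p /mount/mynewvol

# Add an fstab entry using the /dev/mapper path, then mount everything
echo '/dev/mapper/pve-mynewvol /mount/mynewvol ext4 errors=remount-ro,noatime 0 1' >> /etc/fstab
mount -a
```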

Thanks, that worked. I added the volume, created the filesystem, and added it to the storage config.

Then I realized it doesn't do what I want :D But I learned a lot, so thanks a lot.

I thought this would be a way to have files accessible from VMs/LXCs and the host since I am using borg backup to create offsite backups.

I'll need to read up on how to restructure my backup strategy.
Ah, sorry, I should have noticed from your posts what you wanted to achieve; everything seems quite clear in retrospect. For sharing files between VMs, you could use an NFS or SMB server.
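For the NFS variant, a minimal setup could look like this; the export path, subnet, and host IP are assumptions you would adapt:

```shell
# On the host: install the NFS server and export a directory
apt install nfs-kernel-server
echo '/mnt/shared 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# In a VM: mount the share (replace <host-ip> with the host's address)
mkdir -p /mnt/shared
mount -t nfs <host-ip>:/mnt/shared /mnt/shared
```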
Thanks, that's a great idea.
I might just add my backup script to my container template, I'll have to sleep on what I'll do, there are so many options :)
@shanreich I made a similar mistake.

I bought a new disk, created a new PV, added it to the existing VG (pve), and increased the existing LV (data). However, I then realized my partition table was wrong, as I had created a DOS partition table limited to 2T (my disk is 4T). So I'd like to recreate it as GPT in order to create two partitions.

How can I do so without breaking my LVM:

root@nuc:/home/philippe# pvs
  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <931,01g    0
  /dev/sda1      pve lvm2 a--    <2,00t    0
root@nuc:/home/philippe# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   2  16   0 wz--n- <2,91t    0
root@nuc:/home/philippe# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz--   2,79t             25,31  3,99
  root          pve -wi-ao----  96,00g
  swap          pve -wi-ao----   8,00g
  vm-100-disk-0 pve Vwi-aotz--  15,00g data        73,12
  vm-100-disk-1 pve Vwi-aotz-- 650,00g data        91,23
  vm-100-disk-2 pve Vwi-aotz--   5,50g data        27,26
  vm-101-disk-0 pve Vwi-aotz--  15,00g data        40,67
  vm-102-disk-0 pve Vwi-aotz--  10,00g data        53,51
  vm-102-disk-1 pve Vwi-aotz-- 100,00g data        62,50
  vm-103-disk-1 pve Vwi-aotz--  10,00g data        97,70
  vm-104-disk-0 pve Vwi-aotz--  15,00g data        25,95
  vm-105-disk-0 pve Vwi-aotz--   4,00g data        69,59
  vm-106-disk-0 pve Vwi-a-tz--  15,00g data        43,36
  vm-107-disk-1 pve Vwi-aotz--  15,00g data        99,76
  vm-108-disk-0 pve Vwi-aotz--   4,00g data        99,24
  vm-109-disk-0 pve Vwi-aotz--   4,00g data        60,22
I assume it is the /dev/sda1 device?

Can you check whether there are any LVs still on that device via:
pvdisplay /dev/sda1

I think it would make sense to create a separate VG for that disk in general, since it seems to be an HDD/SSD rather than an NVMe drive.
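If the PV turns out to be empty (or once its data has been moved or backed up elsewhere), the repartition-to-GPT route could be sketched like this; the partition sizes and the new VG name are assumptions, and with VFree at 0 pvmove has no free extents to migrate to, so space would have to be freed first:

```shell
# Check which LVs have extents on the PV
pvdisplay -m /dev/sda1

# Migrate extents off the PV (requires free extents elsewhere in the VG),
# then remove it from the VG and wipe the LVM label
pvmove /dev/sda1
vgreduce pve /dev/sda1
pvremove /dev/sda1

# DESTRUCTIVE: replace the DOS partition table with GPT and two LVM partitions
sgdisk --zap-all /dev/sda
sgdisk -n1:0:+2T -t1:8E00 /dev/sda   # first partition, type Linux LVM
sgdisk -n2:0:0   -t2:8E00 /dev/sda   # second partition, rest of the disk

# Create a separate VG on the new first partition (name is hypothetical)
pvcreate /dev/sda1
vgcreate hdd-vg /dev/sda1
```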

