Reduce size of local-lvm

coldfire7

I want to know if there is any way to reduce size of the local-lvm? I would like to resize it and keep 100GB unallocated at the end of the SSD.



Bash:
> df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.8G     0  7.8G   0% /dev
tmpfs                 1.6G   18M  1.6G   2% /run
/dev/mapper/pve-root   94G   13G   77G  15% /
tmpfs                 7.9G   40M  7.8G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/nvme0n1p2        511M  312K  511M   1% /boot/efi
/dev/fuse              30M   36K   30M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0
> pvs
  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <953.37g <16.00g
> vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1  16   0 wz--n- <953.37g <16.00g
> lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <816.70g             16.08  0.96                          
  root          pve -wi-ao----   96.00g                                                  
  swap          pve -wi-ao----    8.00g                                                  
  vm-100-disk-0 pve Vwi-aotz--   30.00g data        5.49                                  
  vm-101-disk-0 pve Vwi-a-tz--  120.00g data        6.24                                  
  vm-102-disk-0 pve Vwi-a-tz--  120.00g data        5.89                                  
  vm-108-disk-0 pve Vwi-aotz--  130.00g data        51.10                                
  vm-200-disk-0 pve Vwi-aotz--   30.00g data        36.64                                
  vm-201-disk-0 pve Vwi-aotz--   30.00g data        12.59                                
  vm-202-disk-0 pve Vwi-aotz--   30.00g data        64.61                                
  vm-203-disk-0 pve Vwi-aotz--   30.00g data        12.79                                
  vm-204-disk-0 pve Vwi-aotz--   10.00g data        16.16                                
  vm-205-disk-0 pve Vwi-a-tz--   15.00g data        21.02                                
  vm-208-disk-0 pve Vwi-a-tz--   10.00g data        13.87                                
  vm-209-disk-0 pve Vwi-a-tz--   10.00g data        13.89                                
  vm-900-disk-0 pve Vwi-a-tz--   20.00g data        15.78                                
> lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1                      259:0    0 953.9G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0   512M  0 part /boot/efi
└─nvme0n1p3                  259:3    0 953.4G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   8.3G  0 lvm
  │ └─pve-data-tpool         253:4    0 816.7G  0 lvm
  │   ├─pve-data             253:5    0 816.7G  0 lvm
  │   ├─pve-vm--108--disk--0 253:6    0   130G  0 lvm
  │   ├─pve-vm--202--disk--0 253:7    0    30G  0 lvm
  │   ├─pve-vm--201--disk--0 253:8    0    30G  0 lvm
  │   ├─pve-vm--200--disk--0 253:9    0    30G  0 lvm
  │   ├─pve-vm--101--disk--0 253:10   0   120G  0 lvm
  │   ├─pve-vm--102--disk--0 253:11   0   120G  0 lvm
  │   ├─pve-vm--900--disk--0 253:12   0    20G  0 lvm
  │   ├─pve-vm--100--disk--0 253:13   0    30G  0 lvm
  │   ├─pve-vm--205--disk--0 253:14   0    15G  0 lvm
  │   ├─pve-vm--204--disk--0 253:15   0    10G  0 lvm
  │   ├─pve-vm--203--disk--0 253:16   0    30G  0 lvm
  │   ├─pve-vm--208--disk--0 253:17   0    10G  0 lvm
  │   └─pve-vm--209--disk--0 253:18   0    10G  0 lvm
  └─pve-data_tdata           253:3    0 816.7G  0 lvm
    └─pve-data-tpool         253:4    0 816.7G  0 lvm
      ├─pve-data             253:5    0 816.7G  0 lvm
      ├─pve-vm--108--disk--0 253:6    0   130G  0 lvm
      ├─pve-vm--202--disk--0 253:7    0    30G  0 lvm
      ├─pve-vm--201--disk--0 253:8    0    30G  0 lvm
      ├─pve-vm--200--disk--0 253:9    0    30G  0 lvm
      ├─pve-vm--101--disk--0 253:10   0   120G  0 lvm
      ├─pve-vm--102--disk--0 253:11   0   120G  0 lvm
      ├─pve-vm--900--disk--0 253:12   0    20G  0 lvm
      ├─pve-vm--100--disk--0 253:13   0    30G  0 lvm
      ├─pve-vm--205--disk--0 253:14   0    15G  0 lvm
      ├─pve-vm--204--disk--0 253:15   0    10G  0 lvm
      ├─pve-vm--203--disk--0 253:16   0    30G  0 lvm
      ├─pve-vm--208--disk--0 253:17   0    10G  0 lvm
      └─pve-vm--209--disk--0 253:18   0    10G  0 lvm
 
Hi,
LVM does not support shrinking thin pools yet. What you'd have to do is move all volumes in the thin pool somewhere else and recreate the thin pool with a smaller size.
 
Hi,
LVM does not support shrinking thin pools yet. What you'd have to do is move all volumes in the thin pool somewhere else and recreate the thin pool with a smaller size.
I guess I can just shutdown all the VMs and CTs and take backup of them.
Is there a guide I can follow to delete and re-create the thin pool?
 
Please also check that the backups work before proceeding to re-create the thin pool! What you can do is:
Code:
lvremove pve/data
lvcreate -L<pool size>G -ndata pve
lvconvert --type thin-pool --poolmetadatasize <metadata size>G pve/data
PVE normally uses 1% of the pool size for the metadatasize, but at least 1G.
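As a concrete sketch of that rule, here is what recreating the pool at a hypothetical target size of 700 GiB would look like. The script only prints the commands instead of running them, since lvremove destroys the pool and every thin volume in it; the target size is an assumption you'd adjust to leave the free space you want in the VG:

```shell
#!/bin/sh
# Hypothetical target pool size in GiB; adjust for your setup.
POOL_SIZE_G=700

# PVE's rule of thumb: metadata = 1% of the pool size, but at least 1 GiB.
META_SIZE_G=$((POOL_SIZE_G / 100))
if [ "$META_SIZE_G" -lt 1 ]; then META_SIZE_G=1; fi

# Printed rather than executed: lvremove wipes the pool and all volumes in it.
echo "lvremove pve/data"
echo "lvcreate -L${POOL_SIZE_G}G -ndata pve"
echo "lvconvert --type thin-pool --poolmetadatasize ${META_SIZE_G}G pve/data"
```

Only run the printed commands once you have verified that all backups restore correctly.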
 
First: if you don't already know how to do backups, you should be really worried. You should always have recent backups, not just when wiping storage.

The easiest option is VZDump: you just need some free disk or network share to use as a backup storage, and it can all be done from the web UI.
Better still is the Proxmox Backup Server, but that means setting up a new server first.
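For reference, a sketch of the CLI equivalent of a VZDump backup from the web UI; the guest ID 100 and the storage name "local" are examples, and the script only prints the invocations since they need a live PVE host:

```shell
#!/bin/sh
# Example guest ID and backup storage name (assumptions, adjust to your setup).
VMID=100
STORAGE=local

# Snapshot mode lets the guest keep running during the backup.
echo "vzdump $VMID --storage $STORAGE --mode snapshot --compress zstd"

# A VM archive can later be restored into a VMID with qmrestore:
echo "qmrestore <archive> $VMID"
```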
 
I'm a newbie with Proxmox; I used to use VirtualBox.

I installed it a week ago on a 1TB SSD with the default config.

I know how to back up and restore a VM, but I don't know how to export all VMs to external storage, rebuild the partitions, and then restore them and make them work :)

I'll pay attention to your comments and try to do it myself. It's a pity I have 700GB free in local-lvm and no space in local for VM backups.

Thank you for your fast response.

Regards,
Carlos.
 
I installed it a week ago on a 1TB SSD with the default config.

I know how to back up and restore a VM, but I don't know how to export all VMs to external storage, rebuild the partitions, and then restore them and make them work :)
I'll pay attention to your comments and try to do it myself. It's a pity I have 700GB free in local-lvm and no space in local for VM backups.
Backups stored on the same disk as your VMs don't count as backups. SSDs are consumables that wear with each write and very often fail without any warning. It's not unlikely that you wake up tomorrow with a completely dead SSD holding all your VMs and all your backups, so everything is lost.

With the default config and "local" as the backup storage, your backup archives are stored in "/var/lib/vz/dump". Copy them to a NAS, USB disk, or whatever.

In your position I would at least buy another disk dedicated to backups, and also back up the "/etc" folder regularly.
 
Hi,
LVM does not support shrinking thin pools yet. What you'd have to do is move all volumes in the thin pool somewhere else and recreate the thin pool with a smaller size.

Hi, a follow-up question, as this answer is more than 3 years old: is it possible to shrink LVM-thin pools today?
I would like to free up some unused space on my system drive to store data and pass it through to my OMV NAS VM. Is that possible in the meantime, or do I still need to move all volumes in the thin pool somewhere else and recreate it with a smaller size (as you stated in 2020)? Thank you!
 
Please also check that the backups work before proceeding to re-create the thin pool! What you can do is:
Code:
lvremove pve/data
lvcreate -L<pool size>G -ndata pve
lvconvert --type thin-pool --poolmetadatasize <metadata size>G pve/data
PVE normally uses 1% of the pool size for the metadatasize, but at least 1G.
Thank you, this still works with v8.4.1.
Shrank my local-lvm from 1000GB to 400GB, and it's still 70% unused...
 
Thank you, this still works with v8.4.1.
Shrank my local-lvm from 1000GB to 400GB, and it's still 70% unused...

Code:
root@pve:~# pvs
  PV             VG  Fmt  Attr PSize    PFree   
  /dev/nvme0n1p3 pve lvm2 a--  <930.51g <418.51g

How can I use the disk space that has now been freed up?
I would like to mount the free space in a directory and make it available to my LXCs via a bind mount point.
 
Hi,
Code:
root@pve:~# pvs
  PV             VG  Fmt  Attr PSize    PFree  
  /dev/nvme0n1p3 pve lvm2 a--  <930.51g <418.51g

How can I use the disk space that has now been freed up?
I would like to mount the free space in a directory and make it available to my LXCs via a bind mount point.
you could create a new logical volume in the volume group (not in the thin pool), format that with a file system and mount it.
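Sketched out, that could look like the following. The LV name "extra", the 400G size, and the mount point are all hypothetical; the script only prints the commands, since they need root on the PVE host and the size must match your VG's actual PFree:

```shell
#!/bin/sh
# Example names and size (assumptions, adjust to your setup).
LV=extra
SIZE=400G
MNT=/mnt/extra

# A plain LV straight in VG "pve" (note: NOT -T, so not inside the thin pool).
echo "lvcreate -L$SIZE -n $LV pve"
echo "mkfs.ext4 /dev/pve/$LV"
echo "mkdir -p $MNT && mount /dev/pve/$LV $MNT"
# Persist the mount across reboots:
echo "echo '/dev/pve/$LV $MNT ext4 defaults 0 2' >> /etc/fstab"
```

Once mounted, the directory could then be handed to a container as a bind mount point, e.g. with something like `pct set <ctid> -mp0 /mnt/extra,mp=/mnt/extra` (container ID and paths again hypothetical).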
 
Hi,

you could create a new logical volume in the volume group (not in the thin pool), format that with a file system and mount it.

Code:
--- Logical volume ---
  LV Path                /dev/pve/cache
  LV Name                cache
  VG Name                pve
  LV UUID                7YdxYL-3x1F-7QB2-y8lQ-aDlN-2tmT-Gr0w6Z
  LV Write Access        read/write
  LV Creation host, time pve, 2025-05-07 12:32:00 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                418.51 GiB
  Mapped size            1.81%
  Current LE             107139
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:26

Thx, works ;)