Reduce size of local-lvm

Is there any way to reduce the size of local-lvm? I would like to resize it and keep 100 GB unallocated at the end of the SSD.

Bash:
> df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.8G     0  7.8G   0% /dev
tmpfs                 1.6G   18M  1.6G   2% /run
/dev/mapper/pve-root   94G   13G   77G  15% /
tmpfs                 7.9G   40M  7.8G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/nvme0n1p2        511M  312K  511M   1% /boot/efi
/dev/fuse              30M   36K   30M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0
> pvs
  PV             VG  Fmt  Attr PSize    PFree
  /dev/nvme0n1p3 pve lvm2 a--  <953.37g <16.00g
> vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  pve   1  16   0 wz--n- <953.37g <16.00g
> lvs
  LV            VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- <816.70g             16.08  0.96                          
  root          pve -wi-ao----   96.00g                                                  
  swap          pve -wi-ao----    8.00g                                                  
  vm-100-disk-0 pve Vwi-aotz--   30.00g data        5.49                                  
  vm-101-disk-0 pve Vwi-a-tz--  120.00g data        6.24                                  
  vm-102-disk-0 pve Vwi-a-tz--  120.00g data        5.89                                  
  vm-108-disk-0 pve Vwi-aotz--  130.00g data        51.10                                
  vm-200-disk-0 pve Vwi-aotz--   30.00g data        36.64                                
  vm-201-disk-0 pve Vwi-aotz--   30.00g data        12.59                                
  vm-202-disk-0 pve Vwi-aotz--   30.00g data        64.61                                
  vm-203-disk-0 pve Vwi-aotz--   30.00g data        12.79                                
  vm-204-disk-0 pve Vwi-aotz--   10.00g data        16.16                                
  vm-205-disk-0 pve Vwi-a-tz--   15.00g data        21.02                                
  vm-208-disk-0 pve Vwi-a-tz--   10.00g data        13.87                                
  vm-209-disk-0 pve Vwi-a-tz--   10.00g data        13.89                                
  vm-900-disk-0 pve Vwi-a-tz--   20.00g data        15.78                                
> lsblk
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
nvme0n1                      259:0    0 953.9G  0 disk
├─nvme0n1p1                  259:1    0  1007K  0 part
├─nvme0n1p2                  259:2    0   512M  0 part /boot/efi
└─nvme0n1p3                  259:3    0 953.4G  0 part
  ├─pve-swap                 253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root                 253:1    0    96G  0 lvm  /
  ├─pve-data_tmeta           253:2    0   8.3G  0 lvm
  │ └─pve-data-tpool         253:4    0 816.7G  0 lvm
  │   ├─pve-data             253:5    0 816.7G  0 lvm
  │   ├─pve-vm--108--disk--0 253:6    0   130G  0 lvm
  │   ├─pve-vm--202--disk--0 253:7    0    30G  0 lvm
  │   ├─pve-vm--201--disk--0 253:8    0    30G  0 lvm
  │   ├─pve-vm--200--disk--0 253:9    0    30G  0 lvm
  │   ├─pve-vm--101--disk--0 253:10   0   120G  0 lvm
  │   ├─pve-vm--102--disk--0 253:11   0   120G  0 lvm
  │   ├─pve-vm--900--disk--0 253:12   0    20G  0 lvm
  │   ├─pve-vm--100--disk--0 253:13   0    30G  0 lvm
  │   ├─pve-vm--205--disk--0 253:14   0    15G  0 lvm
  │   ├─pve-vm--204--disk--0 253:15   0    10G  0 lvm
  │   ├─pve-vm--203--disk--0 253:16   0    30G  0 lvm
  │   ├─pve-vm--208--disk--0 253:17   0    10G  0 lvm
  │   └─pve-vm--209--disk--0 253:18   0    10G  0 lvm
  └─pve-data_tdata           253:3    0 816.7G  0 lvm
    └─pve-data-tpool         253:4    0 816.7G  0 lvm
      ├─pve-data             253:5    0 816.7G  0 lvm
      ├─pve-vm--108--disk--0 253:6    0   130G  0 lvm
      ├─pve-vm--202--disk--0 253:7    0    30G  0 lvm
      ├─pve-vm--201--disk--0 253:8    0    30G  0 lvm
      ├─pve-vm--200--disk--0 253:9    0    30G  0 lvm
      ├─pve-vm--101--disk--0 253:10   0   120G  0 lvm
      ├─pve-vm--102--disk--0 253:11   0   120G  0 lvm
      ├─pve-vm--900--disk--0 253:12   0    20G  0 lvm
      ├─pve-vm--100--disk--0 253:13   0    30G  0 lvm
      ├─pve-vm--205--disk--0 253:14   0    15G  0 lvm
      ├─pve-vm--204--disk--0 253:15   0    10G  0 lvm
      ├─pve-vm--203--disk--0 253:16   0    30G  0 lvm
      ├─pve-vm--208--disk--0 253:17   0    10G  0 lvm
      └─pve-vm--209--disk--0 253:18   0    10G  0 lvm
 
Hi,
LVM does not support reducing thin pools in size yet. What you'd have to do is move all volumes in the thin pool somewhere else and recreate the thin pool with a smaller size.
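For reference, a rough sketch of what "moving volumes somewhere else" can look like with the stock tooling, assuming a second storage (here the hypothetical other-storage) is already defined; the exact subcommand spelling depends on the PVE release (qm disk move on current versions, qm move-disk/qm move_disk on older ones):
Code:
# move a VM disk off the thin pool to another storage
qm disk move 100 scsi0 other-storage
# same idea for a container volume
pct move-volume 200 rootfs other-storage
Once nothing references the pool anymore, it can be removed and re-created (see the commands further down in this thread).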
 
Hi,
LVM does not support reducing thin pools in size yet. What you'd have to do is move all volumes in the thin pool somewhere else and recreate the thin pool with a smaller size.
I guess I can just shut down all the VMs and CTs and take backups of them.
Is there a guide I can follow to delete and re-create the thin pool?
 
Please also check that the backups work before proceeding to re-create the thin pool! What you can do is:
Code:
lvremove pve/data                 # WARNING: destroys the thin pool and every volume in it
lvcreate -L<pool size>G -ndata pve                # re-create "data" as a plain LV at the new size
lvconvert --type thin-pool --poolmetadatasize <metadata size>G pve/data   # convert it into a thin pool
PVE normally uses 1% of the pool size for the metadata size, but at least 1G.
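As a concrete illustration (the sizes are made up): re-creating the pool at 700 GiB, with the 1%-of-pool-size rule giving 7 GiB of metadata:
Code:
lvremove pve/data
lvcreate -L700G -ndata pve
lvconvert --type thin-pool --poolmetadatasize 7G pve/data
Since the VG and LV names stay the same, the existing local-lvm storage definition should keep working without changes.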
 
First... if you don't already know how to do backups, you should be really worried. You should always have recent backups, not just when wiping storage.

The easiest option would be VZDump, where you just need some free disk or network share to use as backup storage. It can all be done using the web UI.
Even better would be to do backups using the Proxmox Backup Server, but for that you have to set up this new server first.
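A minimal CLI sketch of the VZDump route, assuming a backup-capable storage named "backup" is already configured (the name is illustrative; the same can be done from the web UI):
Code:
# back up one guest to the "backup" storage, stopped for a consistent image
vzdump 100 --storage backup --mode stop --compress zstd
# or back up every guest on the node in one go
vzdump --all --storage backup --mode stop --compress zstd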
 
I'm a newbie with Proxmox; I used to use VirtualBox.

I installed it a week ago on a 1 TB SSD with the default config.

I know how to back up and restore a VM, but I don't know how to export all VMs to external storage, rebuild the partitions, and then restore them and make them work :)

I'll pay attention to your comments and try to do it by myself. It's a pity that I have 700 GB free in local-lvm and no space in local for VM backups.

Thank you for your fast response.

Regards,
Carlos.
 
I installed it a week ago on a 1 TB SSD with the default config.

I know how to back up and restore a VM, but I don't know how to export all VMs to external storage, rebuild the partitions, and then restore them and make them work :)
I'll pay attention to your comments and try to do it by myself. It's a pity that I have 700 GB free in local-lvm and no space in local for VM backups.
Backups of VMs stored on the same disk as the VMs don't count as backups. SSDs are consumables that wear with each write and very often fail without any warning, so it's not unlikely that you wake up tomorrow and your SSD is completely dead, with all your VMs and all your backups on it, and everything is lost.

With the default config and "local" as backup storage, your backup archives are stored in "/var/lib/vz/dump". Copy them to a NAS, USB disk, or whatever.

In your position I would at least buy another disk dedicated to backups, and then also back up the "/etc" folder regularly.
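A sketch of both steps, assuming a backup disk mounted at the hypothetical path /mnt/usb-backup:
Code:
# copy the VZDump archives off the host
rsync -av /var/lib/vz/dump/ /mnt/usb-backup/dump/
# also snapshot the host configuration
tar czf /mnt/usb-backup/pve-etc-$(date +%F).tar.gz /etc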
 
Hi,
LVM does not support reducing thin pools in size yet. What you'd have to do is move all volumes in the thin pool somewhere else and recreate the thin pool with a smaller size.

Hi, a follow-up question, as this answer is more than 3 years old: is it possible to reduce LVM-thin pools in size today?
I would like to free up some unused space on my system drive to use it for storing data and passing it through to my OMV NAS VM. Is that possible in the meantime, or do I still need to move all volumes in the thin pool somewhere else and recreate the thin pool with a smaller size (as you stated in 2020)? Thank you!
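One way the freed space could reach an OMV VM, sketched here with a hypothetical LV name (omv-data) and VMID (105): carve a plain LV out of the VG's free space, outside the thin pool, and attach the block device to the VM as an extra disk.
Code:
# plain LV from the VG's free space, not a thin volume
lvcreate -L100G -n omv-data pve
# attach the block device to the VM as an additional SCSI disk
qm set 105 -scsi1 /dev/pve/omv-data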
 
Please also check that the backups work before proceeding to re-create the thin pool! What you can do is:
Code:
lvremove pve/data                 # WARNING: destroys the thin pool and every volume in it
lvcreate -L<pool size>G -ndata pve                # re-create "data" as a plain LV at the new size
lvconvert --type thin-pool --poolmetadatasize <metadata size>G pve/data   # convert it into a thin pool
PVE normally uses 1% of the pool size for the metadata size, but at least 1G.
Thank you, this still works with v8.4.1.
Shrank my local-lvm from 1,000 GB to 400 GB, and it's still 70% unused...
 
Thank you, this still works with v8.4.1.
Shrank my local-lvm from 1,000 GB to 400 GB, and it's still 70% unused...

Code:
root@pve:~# pvs
  PV             VG  Fmt  Attr PSize    PFree   
  /dev/nvme0n1p3 pve lvm2 a--  <930.51g <418.51g

How can I use the disk space that has now been freed up?
I would like to mount the free space to a directory and make it available to my LXCs via bind mount points.
 
Hi,
Code:
root@pve:~# pvs
  PV             VG  Fmt  Attr PSize    PFree  
  /dev/nvme0n1p3 pve lvm2 a--  <930.51g <418.51g

How can I use the disk space that has now been freed up?
I would like to mount the free space to a directory and make it available to my LXCs via bind mount points.
You could create a new logical volume in the volume group (not in the thin pool), format it with a file system, and mount it.
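A minimal sketch of those steps, with illustrative names and sizes (an ext4 volume called cache, mounted at /mnt/cache, and a hypothetical container 101 for the bind mount point). Note the plain lvcreate against the VG, without any thin pool option:
Code:
# plain LV straight from the VG's free extents -- NOT a thin volume inside pve/data
lvcreate -L418G -n cache pve
mkfs.ext4 /dev/pve/cache
mkdir -p /mnt/cache
mount /dev/pve/cache /mnt/cache
# make the mount persistent
echo '/dev/pve/cache /mnt/cache ext4 defaults 0 2' >> /etc/fstab
# expose a subdirectory to a container as a bind mount point
pct set 101 -mp0 /mnt/cache/shared,mp=/mnt/shared
If the volume were instead created as a thin volume inside pve/data, every write to it would consume thin pool space.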
 
Hi,

You could create a new logical volume in the volume group (not in the thin pool), format it with a file system, and mount it.

Code:
--- Logical volume ---
  LV Path                /dev/pve/cache
  LV Name                cache
  VG Name                pve
  LV UUID                7YdxYL-3x1F-7QB2-y8lQ-aDlN-2tmT-Gr0w6Z
  LV Write Access        read/write
  LV Creation host, time pve, 2025-05-07 12:32:00 +0200
  LV Pool name           data
  LV Status              available
  # open                 1
  LV Size                418.51 GiB
  Mapped size            1.81%
  Current LE             107139
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:26

Thx, works ;)
 
Hi

what I did:
  • Installed Proxmox VE 9 on a new 4TB NVMe SSD.
  • During setup, reduced the default thin pool (pve/data) from 1 TB → 400 GB.
  • Used the remaining space to create a new logical volume (LV).
  • Made a filesystem on that LV and mounted it at /mnt/cache.

BUT:

When copying data to /mnt/cache, the “Thin Pool free space” in the web UI shrinks at the same rate. :oops:

Where did I go wrong?

PVS:
Code:
root@pve:~# pvs
  PV             VG  Fmt  Attr PSize  PFree
  /dev/nvme0n1p3 pve lvm2 a--  <3.73t     0
LVS:
Code:
root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  cache         pve Vwi-aotz--   3.32t data        3.67
  data          pve twi-aotz-- 400.00g             39.87  1.77
  root          pve -wi-ao----  <3.32t
  swap          pve -wi-ao----   8.00g
  vm-100-disk-0 pve Vwi-aotz--   2.00g data        55.96
  vm-101-disk-0 pve Vwi-aotz--  12.00g data        33.68
  vm-102-disk-0 pve Vwi-aotz--   2.00g data        55.29
  vm-107-disk-0 pve Vwi-aotz--   8.00g data        43.04
  vm-110-disk-0 pve Vwi-aotz--  11.00g data        94.18
  vm-112-disk-0 pve Vwi-aotz-- 820.00m data        42.45
  vm-115-disk-0 pve Vwi-aotz--  11.00g data        94.69
  vm-116-disk-0 pve Vwi-aotz--   8.00g data        26.71
  vm-119-disk-0 pve Vwi-aotz--   2.00g data        83.11
  vm-300-disk-0 pve Vwi-a-tz--  32.00g data         0.00
VGS:
Code:
root@pve:~# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  pve   1  14   0 wz--n- <3.73t    0
lsblk:
Code:
root@pve:~# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                            8:0    0  7.3T  0 disk
└─sda1                         8:1    0  7.3T  0 part /mnt/8TBSSD
sdb                            8:16   0  3.6T  0 disk
└─sdb1                         8:17   0  3.6T  0 part /mnt/4TBSATA
nvme0n1                      259:0    0  3.7T  0 disk
├─nvme0n1p1                  259:1    0 1007K  0 part
├─nvme0n1p2                  259:2    0    1G  0 part /boot/efi
└─nvme0n1p3                  259:3    0  3.7T  0 part
  ├─pve-swap                 252:0    0    8G  0 lvm  [SWAP]
  ├─pve-root                 252:1    0  3.3T  0 lvm  /
  ├─pve-data_tmeta           252:2    0    4G  0 lvm
  │ └─pve-data-tpool         252:4    0  400G  0 lvm
  │   ├─pve-data             252:5    0  400G  1 lvm
  │   ├─pve-cache            252:6    0  3.3T  0 lvm  /mnt/cache
  │   ├─pve-vm--107--disk--0 252:7    0    8G  0 lvm
  │   ├─pve-vm--110--disk--0 252:8    0   11G  0 lvm
  │   ├─pve-vm--116--disk--0 252:9    0    8G  0 lvm
  │   ├─pve-vm--102--disk--0 252:10   0    2G  0 lvm
  │   ├─pve-vm--119--disk--0 252:11   0    2G  0 lvm
  │   ├─pve-vm--115--disk--0 252:12   0   11G  0 lvm
  │   ├─pve-vm--112--disk--0 252:13   0  820M  0 lvm
  │   ├─pve-vm--100--disk--0 252:14   0    2G  0 lvm
  │   ├─pve-vm--101--disk--0 252:15   0   12G  0 lvm
  │   └─pve-vm--300--disk--0 252:16   0   32G  0 lvm
  └─pve-data_tdata           252:3    0  400G  0 lvm
    └─pve-data-tpool         252:4    0  400G  0 lvm
      ├─pve-data             252:5    0  400G  1 lvm
      ├─pve-cache            252:6    0  3.3T  0 lvm  /mnt/cache
      ├─pve-vm--107--disk--0 252:7    0    8G  0 lvm
      ├─pve-vm--110--disk--0 252:8    0   11G  0 lvm
      ├─pve-vm--116--disk--0 252:9    0    8G  0 lvm
      ├─pve-vm--102--disk--0 252:10   0    2G  0 lvm
      ├─pve-vm--119--disk--0 252:11   0    2G  0 lvm
      ├─pve-vm--115--disk--0 252:12   0   11G  0 lvm
      ├─pve-vm--112--disk--0 252:13   0  820M  0 lvm
      ├─pve-vm--100--disk--0 252:14   0    2G  0 lvm
      ├─pve-vm--101--disk--0 252:15   0   12G  0 lvm
      └─pve-vm--300--disk--0 252:16   0   32G  0 lvm
df -hT:
Code:
root@pve:~# df -hT
Filesystem            Type      Size  Used Avail Use% Mounted on
udev                  devtmpfs  7.5G     0  7.5G   0% /dev
tmpfs                 tmpfs     1.6G  2.8M  1.6G   1% /run
/dev/mapper/pve-root  ext4       94G   10G   80G  12% /
tmpfs                 tmpfs     7.6G   34M  7.5G   1% /dev/shm
efivarfs              efivarfs  128K   12K  112K  10% /sys/firmware/efi/efivars
tmpfs                 tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs                 tmpfs     1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
tmpfs                 tmpfs     7.6G     0  7.6G   0% /tmp
/dev/nvme0n1p2        vfat     1022M  8.8M 1014M   1% /boot/efi
/dev/mapper/pve-cache ext4      3.3T   97G  3.1T   4% /mnt/cache
/dev/sda1             ext4      7.3T  3.6T  3.3T  53% /mnt/8TBSSD
/dev/fuse             fuse      128M   36K  128M   1% /etc/pve
/dev/sdb1             ext4      3.6T  3.0T  487G  87% /mnt/4TBSATA
tmpfs                 tmpfs     1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
tmpfs                 tmpfs     1.6G  4.0K  1.6G   1% /run/user/0
 