Hi,
Shrinking a disk must be done manually.
You can lose your data, so I recommend making a backup beforehand.
First, shrink the filesystem in your container; resize2fs will help you.
The next step would be to shrink the LV.
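A minimal sketch of those two steps (assuming an ext4 rootfs on a hypothetical LV /dev/pve/vm-123-disk-0, shrinking to 10G, with the container stopped; adjust names and sizes to your setup):
e2fsck -f /dev/pve/vm-123-disk-0
resize2fs /dev/pve/vm-123-disk-0 10G
lvreduce -L 10G /dev/pve/vm-123-disk-0
The filesystem must be shrunk before the LV, and never below its used space, otherwise the data at the end of the volume gets cut off.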

Hello,
First of all, thank you for your good advice. It would be worth making scripts to do these actions in an automated way.
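For example, something along these lines could wrap the steps from this thread into one script. It is only a sketch and untested: it assumes an ext4 rootfs on local-lvm with the usual vm-<id>-disk-0 naming, and the sed over the config file is a blunt assumption, so verify the rootfs line by hand before trusting it:

#!/bin/bash
# Sketch: shrink an LXC rootfs on LVM. Usage: ./shrink-ct.sh <ctid> <size-in-G>
# Back up the container first, shrinking can destroy data.
set -e
CTID="$1"
SIZE="$2"
LV="/dev/pve/vm-${CTID}-disk-0"

pct stop "$CTID"
e2fsck -fy "$LV"                 # check the filesystem
resize2fs "$LV" "${SIZE}G"       # shrink the filesystem first...
lvreduce -y -L "${SIZE}G" "$LV"  # ...then the logical volume
# update the size= hint in the container config (assumes the usual rootfs line)
sed -i "s/\(vm-${CTID}-disk-0,size=\)[0-9]\+G/\1${SIZE}G/" "/etc/pve/lxc/${CTID}.conf"
pct start "$CTID"

lvreduce also has -r/--resizefs, which shrinks the filesystem and the LV in one call via fsadm; the two explicit commands just keep each step visible.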

Nevermind, found out how.
Documenting it here for posterity.
On your Proxmox node, do this:
List the containers:
pct list
Stop the particular container you want to resize:
pct stop 999
Find out its path on the node:
lvdisplay | grep "LV Path\|LV Size"
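With that grep, each LV prints as a pair of lines like this (values here copied from the lvdisplay output further down in the thread):
  LV Path                /dev/pve/vm-100-disk-0
  LV Size                15.00 GiB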
For good measure, one can run a filesystem check first:
e2fsck -fy /dev/pve/vm-999-disk-0
Resize the file system:
resize2fs /dev/pve/vm-999-disk-0 10G
Shrink the logical volume:
lvreduce -L 10G /dev/pve/vm-999-disk-0
Edit the container's config, look for the rootfs line, and change the size accordingly:
nano /etc/pve/lxc/999.conf
rootfs: local-lvm:vm-999-disk-0,size=32G  ->  rootfs: local-lvm:vm-999-disk-0,size=10G
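Alternatively, pct can rewrite that line for you instead of editing by hand; untested for a shrunken volume, but setting the same volume spec should work:
pct set 999 -rootfs local-lvm:vm-999-disk-0,size=10G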
Start it:
pct start 999
Enter it and check the new size:
pct enter 999
df -h
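From the node, lvs should now report the reduced size as well (10G in this example):
lvs pve/vm-999-disk-0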
I get this warning:
WARNING: LV pve/vm-100-disk-0 maps 7.93 GiB while the size is only 6.00 GiB

What I already tried:
mkdir /tmp/100
mount /dev/pve/vm-100-disk-0 /tmp/100
fstrim -v /tmp/100

lvdisplay shows:
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                sdYhrD-fTyI-gRsl-BUG0-bReC-9spW-Tkisa0
  LV Write Access        read/write
  LV Creation host, time pve0, 2023-11-02 23:47:36 +0100
  LV Pool name           data
  WARNING: LV pve/vm-100-disk-0 maps 24.04 GiB while the size is only 15.00 GiB.
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Mapped size            100.00%
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:6

Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/pve-vm--100--disk--0   15G  4.8G  9.3G  34% /

Does anyone have any ideas what else I can do?
I don't get what the actual problem is?
The warning:
WARNING: LV pve/vm-100-disk-0 maps 24.04 GiB while the size is only 15.00 GiB
in lvdisplay, and when doing a backup:
WARNING: Thin volume pve/vm-100-disk-0 maps 25833373696 while the size is only 16106127360.
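(For reference, those byte counts are the same numbers in GiB: 16106127360 / 1024^3 = 15.00 GiB, and 25833373696 / 1024^3 ≈ 24.06 GiB, i.e. the thin volume still maps more blocks in the pool than its logical size.)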
Oh sorry, I missed that.
Have you recorded what commands you ran and what output it generated?

No problem:

root@pve0:~# mount /dev/pve/vm-100-disk-0 /tmp/100
root@pve0:~# fstrim -v /tmp/100
/tmp/100: 9.9 GiB (10595639296 bytes) trimmed
root@pve0:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                sdYhrD-fTyI-gRsl-BUG0-bReC-9spW-Tkisa0
  LV Write Access        read/write
  LV Creation host, time pve0, 2023-11-02 23:47:36 +0100
  LV Pool name           data
  WARNING: LV pve/vm-100-disk-0 maps 24.01 GiB while the size is only 15.00 GiB.
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Mapped size            100.00%
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:6
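One thing that might still be worth checking (just an idea, not a confirmed fix): whether the thin pool accepts discards at all. If the pool's discards setting is "ignore", fstrim won't release any mappings. The standard lvs fields show it:
lvs -o lv_name,lv_size,data_percent,discards pve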