Resize LXC DISK on Proxmox

Drthrax74
Apr 22, 2019
Hello,
I have an 8 GB container which was created under LXC, but I cannot resize it to be smaller. I want it to be 5 GB.

I also installed Proxmox on ext4, without ZFS support.

Code:
root@Proxmox:~# pct resize 105 rootfs 3G
unable to shrink disk size
 

Hi,

shrinking a disk must be done manually.
You can lose your data, so I recommend making a backup first.
First, shrink the filesystem in your container; resize2fs will help you.
The next step is to shrink the LV.
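
For your container 105 this would look roughly as follows (a rough sketch, assuming an ext4 rootfs on the default pve volume group; verify the actual device path with lvdisplay first):

Code:
pct stop 105                            # container must be stopped
e2fsck -f /dev/pve/vm-105-disk-0        # required before resize2fs will shrink
resize2fs /dev/pve/vm-105-disk-0 5G     # 1. shrink the filesystem first
lvreduce -L 5G /dev/pve/vm-105-disk-0   # 2. only then shrink the LV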
 

Could you please elaborate on this, or better yet, point to some PVE documentation on decreasing disk size?
 
Nevermind, found out how.
Documenting it here for posterity. :)

On your Proxmox node, do this.

List the containers:
pct list

Stop the particular container you want to resize:
pct stop 999

Find out its path on the node:
lvdisplay | grep "LV Path\|LV Size"

For good measure one can run a file system check:
e2fsck -fy /dev/pve/vm-999-disk-0

Resize the file system:
resize2fs /dev/pve/vm-999-disk-0 10G

Resize the logical volume:
lvreduce -L 10G /dev/pve/vm-999-disk-0

Edit the container's conf, look for the rootfs line and change accordingly:
nano /etc/pve/lxc/999.conf

rootfs: local-lvm:vm-999-disk-0,size=32G >> rootfs: local-lvm:vm-999-disk-0,size=10G

Start it:
pct start 999

Enter it and check the new size:
pct enter 999
df -h
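
A more defensive variant of the resize2fs/lvreduce pair (my own sketch, untested on every setup): shrink the filesystem a bit below the target, reduce the LV, then run resize2fs again without a size so it grows the filesystem to exactly fill the LV. Newer LVM can also do both steps at once with lvreduce -r.

Code:
e2fsck -fy /dev/pve/vm-999-disk-0
resize2fs /dev/pve/vm-999-disk-0 9G      # a bit below the 10G target
lvreduce -L 10G /dev/pve/vm-999-disk-0
resize2fs /dev/pve/vm-999-disk-0         # no size given: grow to fill the LV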
 
Hello,

First of all, thank you for your good advice. It would be worth making a script to do these actions in an automated way.
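
Something like this, perhaps (a hypothetical, untested sketch; the /dev/pve LV path and a single size= entry in the config are assumptions):

Code:
#!/bin/bash
# shrink-ct.sh CTID NEWSIZE   e.g.: ./shrink-ct.sh 999 10G
set -euo pipefail
CTID=$1; NEWSIZE=$2
LV="/dev/pve/vm-${CTID}-disk-0"          # assumed default LV path
read -rp "Backup done? Shrink $LV to $NEWSIZE? [y/N] " ok
[ "$ok" = "y" ] || exit 1
pct stop "$CTID"
e2fsck -fy "$LV"
resize2fs "$LV" "$NEWSIZE"
lvreduce -y -L "$NEWSIZE" "$LV"
# assumes only the rootfs line carries a size= entry
sed -i "s/,size=[0-9]*G/,size=${NEWSIZE}/" "/etc/pve/lxc/${CTID}.conf"
pct start "$CTID"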
 

From my personal point of view scripted disk reductions are iffy, as things may go wrong.
I prefer to do this manually, reviewing each step before I continue to the next.

This is Dangerous Stuff(tm). :)
 

Thanks adrian, nice one!
I had a warning after the resize when checking the volume (lvdisplay -v /dev/pve/vm-100-disk-0) that the LV mapped more space than its size:

Code:
WARNING: LV pve/vm-100-disk-0 maps 7.93 GiB while the size is only 6.00 GiB

So I mounted the volume and ran a trim:

Code:
mkdir /tmp/100
mount /dev/pve/vm-100-disk-0 /tmp/100
fstrim -v /tmp/100

Not sure if this is the best way to do it, but it works :)
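
If I remember correctly, pct can also run the trim for you on a running container, which saves the manual mount (worth verifying on your PVE version):

Code:
pct fstrim 100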
 
It's funny that the question was about LXC resize and not VM resize. Also, if the disk is on Ceph the procedure is one thing, and if it is inside local-lvm it is something completely different.

The lazy one ends up doing the work twice.
 
Thanks for the nice description. My PVE uses btrfs, so as a hint for others, here is the different file location, as it was not so easy to find ;)
Example:
Code:
/var/lib/pve/local-btrfs/images/999/vm-999-disk-0/disk.raw
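
With a raw file like that, the same idea should work on the file directly (an untested sketch; it assumes the image holds a bare ext4 filesystem without a partition table, as PVE container images normally do):

Code:
pct stop 999
e2fsck -fy /var/lib/pve/local-btrfs/images/999/vm-999-disk-0/disk.raw
resize2fs /var/lib/pve/local-btrfs/images/999/vm-999-disk-0/disk.raw 10G
truncate -s 10G /var/lib/pve/local-btrfs/images/999/vm-999-disk-0/disk.raw
# then adjust size= in /etc/pve/lxc/999.conf as described above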
 
Hello,
I can do the resize... and it works somehow... but afterwards the LXC container won't start anymore... what can I do?
 
Same as you, it didn't work when starting the container.
You can restore your backup if you have one (that is what I did).
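
For anyone else hitting this, two things that may help narrow it down (device path assumed as in the how-to above): start the container in the foreground with debug logging, and re-check the filesystem, since an LV reduced below the filesystem size is the classic cause.

Code:
lxc-start -n 999 -F -l DEBUG -o /tmp/lxc-999.log
e2fsck -fy /dev/pve/vm-999-disk-0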
 
Hello,

due to carelessness I enlarged an LXC disk by 1000 GB instead of 1000 MB.
Thanks to @adrian_vg and @hotelrwanda for the help, but my problem is not completely solved yet.

lvdisplay
Code:
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                sdYhrD-fTyI-gRsl-BUG0-bReC-9spW-Tkisa0
  LV Write Access        read/write
  LV Creation host, time pve0, 2023-11-02 23:47:36 +0100
  LV Pool name           data
  WARNING: LV pve/vm-100-disk-0 maps 24.04 GiB while the size is only 15.00 GiB.
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Mapped size            100.00%
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:6

df -h on guest
Code:
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/pve-vm--100--disk--0   15G  4.8G  9.3G  34% /

Does anyone have any ideas what else I can do?
 
I don't get it; what is the actual problem?

The warning LV pve/vm-100-disk-0 maps 24.04 GiB while the size is only 15.00 GiB in lvdisplay, and when doing a backup: WARNING: Thin volume pve/vm-100-disk-0 maps 25833373696 while the size is only 16106127360.
 
Oh sorry, I missed that.

Have you recorded which commands you ran and what output they generated?
 
No problem ;)
Unfortunately I no longer have the output. I executed the commands as described by adrian_vg and hotelrwanda, and there was no error message. It partially worked: from 1 TB down to 24 GB.
 
Code:
root@pve0:~# mount /dev/pve/vm-100-disk-0 /tmp/100
root@pve0:~# fstrim -v /tmp/100
/tmp/100: 9.9 GiB (10595639296 bytes) trimmed
root@pve0:~# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                pve
  LV UUID                sdYhrD-fTyI-gRsl-BUG0-bReC-9spW-Tkisa0
  LV Write Access        read/write
  LV Creation host, time pve0, 2023-11-02 23:47:36 +0100
  LV Pool name           data
  WARNING: LV pve/vm-100-disk-0 maps 24.01 GiB while the size is only 15.00 GiB.
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Mapped size            100.00%
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:6
It only changed from 24.04 GiB to 24.01 GiB.
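
For reference, lvs can show how much of a thin volume is actually in use, as opposed to merely mapped:

Code:
lvs pve/vm-100-disk-0 -o lv_name,lv_size,data_percent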
 
