Container disk size

Hello,

I've recently increased the disk size of one of my Proxmox containers. I made a typo, and now under Resources my root disk shows "size=116936994193"; I would like it to display as, for example, 125G. I'm confused about how to restore this notation (by increasing the disk space?). When trying to convert GiB to GB and so on, I've so far been unable to work out how to fix this. I've seen that I can change the disk size inside /etc/pve/lxc/10*.conf, but I reckon this would mess things up.
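For what it's worth, assuming the unsuffixed value is a byte count, coreutils' numfmt shows what it amounts to in both notations:
Bash:
# Convert the raw byte value to human-readable sizes
numfmt --to=iec 116936994193   # -> 109G (GiB-style, what Proxmox displays)
numfmt --to=si  116936994193   # -> 117G (GB-style)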

Could someone nudge me in the right direction?
 
The GUI does not support shrinking, as it can be dangerous. If you have snapshots or backups, I'd recommend restoring the latest one.
Please share the following so I can give a more detailed answer:
Bash:
qm config CTIDHERE --current
cat /etc/pve/storage.cfg
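If you don't have a recent backup yet, a vzdump one-liner along these lines would create one first ('local' is just an example target; any storage with backup content enabled works):
Bash:
# Back up CT 100 before touching its disk; 'local' is an example storage
vzdump 100 --storage local --mode snapshot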
 
Hi, the storage type is LVM-Thin.

root@pve1-lwd1:~# qm config 100 --current
Configuration file 'nodes/pve1-lwd1/qemu-server/100.conf' does not exist
<could this be because it's a container and not a virtual machine?>
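For a container the equivalent seems to be pct config rather than qm config:
Bash:
# pct is the CLI for LXC containers, qm is for QEMU VMs
pct config 100 --current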
root@pve1-lwd1:/etc/pve/nodes/pve1-lwd1/qemu-server# cat /etc/pve/lxc/100.conf
arch: amd64
cores: 6
features: nesting=1
<omitted>
memory: 13312
mp0: datastore3.hdd:vm-100-disk-0,mp=/archive,size=211061135376
<omitted>
onboot: 1
ostype: debian
protection: 1
rootfs: datastore1.ssd:vm-100-disk-0,size=116936994193
<omitted>
swap: 15360
tags: ct;debian-12
unprivileged: 1

root@pve1-lwd1:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content images,backup,rootdir,iso,vztmpl
    shared 0

dir: datastore2.hdd
    path /mnt/pve/datastore2.hdd
    content rootdir,iso
    is_mountpoint 1
    nodes pve1-lwd1
    shared 0

lvmthin: datastore1.ssd
    thinpool datastore1.ssd
    vgname datastore1.ssd
    content images,rootdir
    nodes pve1-lwd1

lvmthin: datastore3.hdd
    thinpool datastore3.hdd
    vgname datastore3.hdd
    content rootdir,images
    nodes pve1-lwd1
 
I can't give you specific commands without seeing ls -l /dev/mapper/*vm*, and I want you to take a snapshot/backup first before attempting this.
Then check the actual used size inside the CT with df -h and add at least a few gigabytes on top. Do not shrink the filesystem lower than that.
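If it's easier, that usage check can also be run from the host while the CT is still up, e.g.:
Bash:
# Show filesystem usage of the CT's root from the host
pct exec 100 -- df -h /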
Shut down the CT and try this:
Bash:
# Stop the container so its filesystem is not mounted
pct shutdown 100

# Find CT disk(s)
ls -l /dev/mapper/*vm*

# Check the filesystem first (resize2fs requires a clean fsck)
e2fsck -vf /dev/mapper/datastore1.ssd-vm--100--disk--0
# Shrink the filesystem to the target size
resize2fs /dev/mapper/datastore1.ssd-vm--100--disk--0 XXXG
# Ask PVE to update the size stored in the config
pct rescan
Replace XXX with the target size. pct rescan is supposed to update the size in the config, but it didn't work in my test.
The size entry is just visual, so you can either edit it yourself in the config file or extend the disk again a little.
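For the extend option, a pct resize along these lines should do it. Note that pct resize can only grow a disk, so pick a size at or above the current one (the 125G here is just the round figure mentioned earlier):
Bash:
# Grow the rootfs to a clean round size; this also rewrites the size= entry
pct resize 100 rootfs 125G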
 
Hi Impact, thank you for this information!

CT disk:
root@pve1-lwd1:~# ls -l /dev/mapper/datastore1.ssd-vm--100--disk--0
lrwxrwxrwx 1 root root 8 Jun 14 11:40 /dev/mapper/datastore1.ssd-vm--100--disk--0 -> ../dm-15

Will try ASAP.
 
I edited above. I can't tell what a safe size would be without the df -h output from inside the CT, but that should be trivial to figure out yourself ;)