How to add and/or resize an LXC disk

I just upgraded my test system to Proxmox 4 and created an LXC container. It looks fine, but how can I add another disk and/or resize the current root disk?
 
Hi,
at the moment you have to do this on the command line.
Code:
pct resize <vmid> <disk> <size> [OPTIONS]
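For example, to grow the root filesystem of container 101 to 20 GiB (VMID and sizes here are just placeholders):
Code:
# grow the rootfs of CT 101 to 20 GiB
pct resize 101 rootfs 20G
# a relative increase should also work, e.g. grow by 5 GiB
pct resize 101 rootfs +5G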
 
Ok, that's the resize part.

But adding another virtual disk, or even an existing block device, seems to be much more complicated. With "pct --help" I didn't see anything related to this, and searching around brings up solutions based on bind mounts, hook scripts and the like, which makes it look as if a second disk isn't something that is recommended for LXC containers. Or maybe I'm missing something.
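For reference, on more recent pve-container versions an additional volume can be attached to a container as a mount point with pct set; a rough sketch (container ID, storage name, size and mount path are only examples):
Code:
# allocate a new 8 GiB volume on storage "local-zfs" and mount it at /mnt/data inside CT 106
pct set 106 -mp0 local-zfs:8,mp=/mnt/data
# alternatively, bind-mount an existing host directory (no new volume is allocated)
pct set 106 -mp1 /srv/shared,mp=/mnt/shared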
 
I tried that on the command line:
Code:
# pct resize 212 rootfs 15G
Could not get zvol size at /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm line 308.

I resized that by hand:
Code:
zfs set quota=15G subvol-212-disk-1
zfs set refquota=15G subvol-212-disk-1

and it works.
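One caveat when resizing by hand (also mentioned further down in this thread): the size= value stored in the container config does not change automatically, so it should be updated to match. A sketch, with the storage name only as an example:
Code:
# in /etc/pve/lxc/212.conf, adjust the size= on the rootfs line, e.g.
rootfs: zfs-local:subvol-212-disk-1,size=15G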
 
Hi yarii,
this is a bug.
I have fixed it and the fix will be available soon.
 
I'm also using ZFS and LXC containers. I tried "pct resize 106 rootfs 80gb" (down from 200gb),

but I get: "unable to shrink disk size"

So I tried:
zfs set quota=80G subvol-106-disk-1
zfs set refquota=80G subvol-106-disk-1
and changed "/etc/pve/nodes/server3/lxc/106.conf" to 80GB.

That worked fine.

Since this issue is old: is this perhaps still a problem?

Code:
proxmox-ve: 4.1-39 (running kernel: 4.2.8-1-pve)
pve-manager: 4.1-15 (running version: 4.1-15/8cd55b52)
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-4.2.8-1-pve: 4.2.8-39
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-33
qemu-server: 4.0-62
pve-firmware: 1.1-7
libpve-common-perl: 4.0-49
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-42
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-8
pve-container: 1.0-46
pve-firewall: 2.0-18
pve-ha-manager: 1.0-23
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 2.0.0-pve1
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve7~jessie
 
Can confirm that the same error occurs with LVM-backed containers. A manual (offline) resize works fine:

Code:
e2fsck -fy /dev/pve-store/vm-105-disk-2       # check the filesystem first
resize2fs /dev/pve-store/vm-105-disk-2 2G     # shrink the FS to below the target size
lvreduce -L 3G /dev/pve-store/vm-105-disk-2   # reduce the LV to the target size
resize2fs /dev/pve-store/vm-105-disk-2        # grow the FS again to fill the LV

together with an edit to /etc/pve/lxc/105.conf to correct the size.
 
Shrinking is not supported and has to be done manually or with a backup and restore.
 
Has this been addressed in version 4.0.28? I tried pct resize and got the same perl error as above.
Wasn't resizing an option through the GUI at one point?
 
You can resize disk images and container filesystems, but only *increase* them.
We haven't implemented shrinking at this point; the risk of shooting yourself in the foot is too big, especially for virtual machines.

For containers, when restoring a backup you have the possibility of setting the rootfs size, and this is the recommended way to set up a smaller FS.
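A rough sketch of that backup-and-restore approach (VMID, archive name and storage are only placeholders):
Code:
# back up the container first
vzdump 104 --mode snapshot
# restore it with a smaller root filesystem (8 GiB allocated on storage "local-zfs")
pct restore 104 /var/lib/vz/dump/<vzdump-archive>.tar.lzo --rootfs local-zfs:8
If the container still exists under the same ID, pct restore also has a --force option to overwrite it.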
 
Manu, that is clear. What is the official process for extending LXC filesystems if 'pct resize' isn't working? I've seen a number of potential solutions using various system-level tools, including zfs' management commands. Is this what Proxmox is saying we have to use? Are staff aware that pct resize isn't working under 4.0.x? Is there a timeline to get that addressed? Thanks.
 
"pct resize" is working just fine - within the described limits (growing only). if it does not for you, please post the exact error message, your container and storage configuration and the output of "pveversion -v". thanks!
 
Fabian,
Here is the output:

Code:
root@proxmox:/home/mikec# pct resize 104 rootfs 400G
Could not get zvol size at /usr/share/perl5/PVE/Storage/ZFSPoolPlugin.pm line 308.

Code:
proxmox-ve: 4.0-16 (running kernel: 4.2.2-1-pve)
pve-manager: 4.0-48 (running version: 4.0-48/0d8559d0)
pve-kernel-4.2.2-1-pve: 4.2.2-16
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-22
qemu-server: 4.0-30
pve-firmware: 1.1-7
libpve-common-perl: 4.0-29
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-25
pve-libspice-server1: 0.12.5-1
vncterm: 1.2-1
pve-qemu-kvm: 2.4-9
pve-container: 1.0-6
pve-firewall: 2.0-12
pve-ha-manager: 1.0-9
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.3-1
lxcfs: 0.9-pve2
cgmanager: 0.37-pve2
criu: 1.6.0-1
zfsutils: 0.6.5-pve4~jessie

zfs list:
Code:
rpool/subvol-104-disk-1 243G 6.58G 243G /rpool/subvol-104-disk-1

storage.cfg:
Code:
root@proxmox:/etc/pve# cat storage.cfg
zfspool: zfs-local
pool rpool
content images,rootdir

dir: local
path /var/lib/vz
content images,iso,backup,rootdir,vztmpl
maxfiles 0

104.conf:
Code:
#Plex media server on Debian8
arch: amd64
cpulimit: 2
cpuunits: 1024
hostname: plex.aviate.org
memory: 4000
net0: bridge=vmbr0,gw=192.168.1.1,hwaddr=9A:63:CF:3E:92:1C,ip=192.168.1.100/24,ip6=auto,name=eth0,type=veth
ostype: ubuntu
rootfs: zfs-local:subvol-104-disk-1,size=250G
swap: 4000
 
Okay, I withdraw this question. I've gone ahead and upgraded to 4.4.80 and now I see the resize disk option is back in the GUI.
Thanks for your help, all.
 
I'm using PVE 6.2-4. Trying to shrink the disk size. Here is the output. Please help.

Code:
root@pve:/# pct resize 101 rootfs 8G
unable to shrink disk size
 
If you run your containers on ZFS-backed storage, the following is unofficial but verified to work (as always, do a backup first...):
Code:
# 1. take a backup first
vzdump 100 --mode snapshot
# 2. lower the ZFS refquota to the new size
zfs set refquota=8G rpool/data/subvol-100-disk-1
# 3. update the size= value in the container config (paths are relative to /etc/pve)
cat << EOF > 100.conf.diff
--- lxc/100.conf.orig  2021-01-03 12:58:03.000000000 +0100
+++ lxc/100.conf       2021-01-03 15:03:54.000000000 +0100
@@ -2,7 +2,7 @@
 cores: 1
 hostname: lcoovhdolp002
 memory: 2048
-mp0: local-zfs:subvol-100-disk-1,mp=/var/lib/dolibarr,backup=1,size=88G
+mp0: local-zfs:subvol-100-disk-1,mp=/var/lib/dolibarr,backup=1,size=8G
 nameserver: 94.23.248.5 213.186.33.99
 net0: name=eth0,bridge=vmbr0,gw=94.23.248.254,gw6=2001:41d0:2:7bff:ff:ff:ff:ff,hwaddr=02:00:00:2d:95:91,ip=87.98.183.98/32,ip6=2001:41d0:2:7b05::1/128,type=veth
 onboot: 1
EOF
patch -p0 lxc/100.conf 100.conf.diff
Note: in my case the quota property was unset, so I chose not to modify it.
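To double-check the result afterwards, something along these lines should do (dataset path and VMID as in the sketch above):
Code:
# verify the new quota/refquota on the dataset
zfs get quota,refquota rpool/data/subvol-100-disk-1
# and confirm the size reported in the container config
pct config 100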
 