Shrink disk size

This was far too complicated.

1. Create a backup (a vzdump sketch follows)
2. Restore the backup with a smaller root disk
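For step 1, a minimal sketch using vzdump (assuming zstd compression; any vzdump archive works, and the dump directory here matches the path used below):

Code:
# hedged sketch: back up CT 100 as a zstd-compressed archive in /data/dump
vzdump 100 --compress zstd --dumpdir /data/dump

For step 2, restore the resulting archive with a new rootfs size: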

Code:
pct restore 101 /data/dump/vzdump-lxc-100-2022_01_03-12_20_07.tar.zst --rootfs encrypted_zfs:100

where:
  • 101 is the new CT ID (if you want the container to keep its old ID, remove the original container first and then reuse that number)
  • --rootfs encrypted_zfs:100 tells the restore to create a new rootfs with a size of 100 GB
  • encrypted_zfs is the storage on which the rootfs should be created; in this example it is my encrypted ZFS pool
  • you can check the existing storages with pvesm status
e.g.:
Code:
pvesm status
> Name                 Type     Status           Total            Used       Available        %
> backups               dir     active      7750838732      4888031492      2784655920   63.06%
> encrypted_zfs     zfspool     active       462932852         8350444       454582408    1.80%
> local                 dir     active        59600812        28260908        28282652   47.42%
> local-data            dir     active       170408272         1197444       160484916    0.70%
 
Hi

How can I restore a container backup with a smaller disk size when not using ZFS?
I have local-lvm as storage.

Should it be something like

Code:
pct restore /mnt/pve/NFS-QNAP-Proxmox-Backup/dump/vzdump-lxc-102-2022_01_06-01_30_03.tar.zst --local-lvm ??????

I'm having issues with disk space, and my Home Assistant VM is now stuck in an io-error state and not running, so I'm looking to free up some space somewhere.

I really don't know where this space is being used (within the VM?), and since the VM is not accessible I can't check.

Any help will be appreciated.

Thanks
 
This is not ZFS specific.

I haven't used LVM, but --rootfs expects the information for the new mount point. Check with pvesm status beforehand for the exact storage name; otherwise it will use what is specified in the CT config. The Proxmox wiki suggests that you can use --rootfs without the mount point, e.g. --rootfs 4 = 4 GB. See also here.
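Based on that wiki note, a size-only restore might look like the following (an untested sketch, reusing the backup path from the first post; 4 would be the new size in GB on the default storage):

Code:
# hedged sketch per the wiki note: size only, allocated on the default storage
pct restore 101 /data/dump/vzdump-lxc-100-2022_01_03-12_20_07.tar.zst --rootfs 4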
 
This is the result of pvesm status


Code:
Name                           Type     Status           Total            Used       Available        %
local                           dir     active        98559220        11509596        82000076   11.68%
local-lvm                   lvmthin     active       335646720       334807603          839116   99.75%


Can I move some GB from local to local-lvm?


I'm trying this

Code:
pct restore 106 vzdump-lxc-102-2022_01_06-01_30_03.tar.zst --storage local-lvm 10

but I'm getting

Code:
400 too many arguments

106 is the new CT ID, right?

I took a look at the man page, but I don't see how to set the new storage value.

I've also seen this. Could it also be a solution?


Thanks
 
I am not an expert, so make sure you have backups ready before trying these suggestions.

Code:
pct restore 110 /data/dump/vzdump-lxc-100-2022_01_03-12_20_07.tar.zst --rootfs local-lvm:10

Explanation:
  • 110 is the new container ID; it must not be in use
  • --rootfs provides "advanced" instructions to override settings for the root file system (see the rootfs line in your /etc/pve/lxc/<CTID>.conf; a sample line follows this list)
  • two of the rootfs settings are submitted here: local-lvm (the storage where it is created) and 10 (the size in GB)
  • AFAIK, don't use --storage for this
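For reference, the rootfs line in the CT config typically looks like the following sketch (the volume name is hypothetical):

Code:
# /etc/pve/lxc/110.conf (volume name hypothetical)
rootfs: local-lvm:vm-110-disk-0,size=10G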
 
Thanks @Helmut101

This is the correct command. I tried it and everything went well, so I'm now saving a lot of disk space by shrinking some containers that I had created with more space than really needed...
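If it helps anyone else, the new rootfs size can be checked afterwards with pct config (106 is the CT ID from my restore above):

Code:
# print the container configuration, including the rootfs line with its size
pct config 106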
 