Hi,

This was far too complicated.
1. Create a backup
2. Restore the backup with a smaller root disk:

Code:
pct restore 101 /data/dump/vzdump-lxc-100-2022_01_03-12_20_07.tar.zst --rootfs encrypted_zfs:100

where:
- 101 is the new CT ID (if you want to give it the old one, remove the container first and then use that number)
- --rootfs encrypted_zfs:100 tells the restore to create a new root FS with a size of 100 GB; encrypted_zfs is the storage where the rootfs should be created (in my example, my encrypted ZFS pool) - you can check the existing storages with pvesm status:

Code:
pvesm status
Name           Type     Status      Total       Used  Available      %
backups        dir      active 7750838732 4888031492 2784655920 63.06%
encrypted_zfs  zfspool  active  462932852    8350444  454582408  1.80%
local          dir      active   59600812   28260908   28282652 47.42%
local-data     dir      active  170408272    1197444  160484916  0.70%
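As a rough sketch of that pre-check, the Available column of pvesm status (in KiB) can be parsed before picking a target storage. The sample output below is the one from this post; in real use you would pipe pvesm status in instead of the here-doc:

```shell
# Sketch: check that a storage (here "encrypted_zfs", from the post's output)
# has room for the planned 100 GB rootfs. Replace the here-doc with
# `pvesm status` on a real node.
avail_kib=$(awk '$1 == "encrypted_zfs" { print $6 }' <<'EOF'
Name           Type     Status      Total       Used  Available      %
backups        dir      active 7750838732 4888031492 2784655920 63.06%
encrypted_zfs  zfspool  active  462932852    8350444  454582408  1.80%
local          dir      active   59600812   28260908   28282652 47.42%
EOF
)
avail_gb=$((avail_kib / 1024 / 1024))
echo "encrypted_zfs has ${avail_gb} GB available"   # -> 433 GB here
```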
Code:
pct restore /mnt/pve/NFS-QNAP-Proxmox-Backup/dump/vzdump-lxc-102-2022_01_06-01_30_03.tar.zst --local-lvm ??????

My container is in io-error and not running, so I'm looking to get some free space somewhere.

--rootfs expects information for the new mount point. Check with pvesm status beforehand for the exact name/path, otherwise it will use what is specified in the CT conf. The Proxmox wiki suggests that you can use --rootfs without the mount point, e.g. --rootfs 4 = 4 GB. See also here.
Code:
pvesm status
Name       Type     Status      Total       Used  Available      %
local      dir      active   98559220   11509596   82000076 11.68%
local-lvm  lvmthin  active  335646720  334807603     839116 99.75%

Code:
pct restore 106 vzdump-lxc-102-2022_01_06-01_30_03.tar.zst --storage local-lvm 10

gives:

400 too many arguments
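A quick illustration of why that invocation fails: options that combine a storage and a size (like --rootfs) take them as a single storage:size word, so passing them as two separate arguments leaves pct with an extra word it cannot parse. build_rootfs_opt below is a hypothetical helper just for the demonstration, not a pct feature:

```shell
# Sketch: storage and size must be joined with a colon into ONE argument.
# Passing "local-lvm 10" as two words triggers "400 too many arguments".
build_rootfs_opt() {
    printf -- '--rootfs %s:%s' "$1" "$2"
}

opt=$(build_rootfs_opt local-lvm 10)
echo "$opt"   # -> --rootfs local-lvm:10
```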
Code:
pct restore 110 /data/dump/vzdump-lxc-100-2022_01_03-12_20_07.tar.zst --rootfs local-lvm:10

where:
- 110 is the new container ID - it should not be in use
- --rootfs provides "advanced" instructions to override the settings for the root file system (look at the rootfs line in your /etc/pve/lxc/<CTID>.conf); two values for rootfs are submitted here: local-lvm (where it is created) and 10 (the size in GB)
- --storage, which you tried, expects just a storage name as a single argument - hence the "400 too many arguments" error
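To see what the restore would fall back to without an override, you can read the current rootfs line from the CT conf. The conf contents below are a made-up sample (assuming a 100 G volume on local-lvm); on a real node you would grep /etc/pve/lxc/<CTID>.conf itself:

```shell
# Sketch: extract the rootfs line the reply points at. The here-doc stands in
# for a real /etc/pve/lxc/<CTID>.conf; its contents are an assumed example.
rootfs_line=$(grep '^rootfs:' <<'EOF'
arch: amd64
hostname: ct102
rootfs: local-lvm:vm-102-disk-0,size=100G
memory: 2048
EOF
)
echo "$rootfs_line"   # shows storage, volume name and current size
```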