I recently researched and carried out a migration of a PVE host from a larger disk to a smaller one, including shrinking the root LV. These are my own notes, which I believe can help people. Of course, you could always reinstall PVE and copy over the configuration files from the old host, but where would be the fun in that?
The notes touch on a few important constraints in tools such as GParted, dd and LVM2, and may help others who need to perform this type of operation. Please feel free to correct my mistakes and I will update the notes accordingly.
Cheers.
- Constraints
- PVE has 3 LVs in one PV: root, swap and data
- root and swap are regular LVs that can be expanded or shrunk with lvresize/lvreduce, as long as there is free space. Since root is mounted while PVE is running, you can only shrink it after booting into a Linux live CD (such as GParted Live).
- data is a thin-pool volume. Only expansion (not shrinking) is supported. To shrink it, you must back up all VMs and LXCs, delete them, delete and re-create the data LV, and restore the VMs and LXCs. This can be done from the PVE host admin console, without booting into a live CD.
- After LVs are shrunk, you must shrink the PV that contains them before GParted can shrink the containing partition to fit the smaller disk. The size of the PV limits how much the partition can be shrunk.
- Even after the LVs are shrunk, the PV may still not be shrinkable, because the LVs are spread across the PV and the free space is non-contiguous. This is mainly a problem when a large root LV is shrunk, leaving a big gap of free space between the root LV and the data LV. The data LV itself is not an issue, since it needs to be deleted and re-created anyway. It's therefore better to shrink the root LV first and then re-create the data LV, eliminating the free space between them.
Otherwise, you must defragment the PV before it can be shrunk, which requires an adequate amount of free space in the PV to shuffle LVs around. There are 3 methods (see more details in Resources). Thus it's best to avoid the fragmentation problem entirely by shrinking root and/or swap before re-creating the data LV.
- manually with pvmove
- pvshrink (a python package from github)
- lvm2defrag: a planning tool to generate a sequence of LVM commands
- GParted cannot copy partitions that contain LVM volumes: the "copy" menu item is greyed out. It only copies partitions with file systems such as ext2/3/4. This leaves dd (plus Clonezilla) as the cloning tools.
- Cloning a disk with dd copies the primary GPT table, but the backup table ends up misplaced: it belongs in the last sectors of the disk, so it lands in the middle when cloning to a larger disk, or is cut off entirely when cloning to a smaller one. What's more, the copied GPT does not match the new disk size. This confuses GParted. It can be fixed with gdisk (gdisk /dev/sdX): use its recovery and transformation menu to repair the primary GPT table and rebuild the backup table.
- It's quite safe to use pvresize to shrink the PV, as it will refuse to proceed if blocked by fragmented LVs.
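The non-contiguous free space problem can be spotted in the PV's segment table. The sketch below works through a made-up segment listing in the shape of `pvs --segments` output; the extent values and the pvmove command at the end are hypothetical, not from a real host:

```shell
# Made-up segment table in the shape of:
#   pvs --segments -o lv_name,seg_start_pe,seg_size_pe
# "free" rows are unallocated extents; more than one run of them means the
# free space is non-contiguous and pvresize would refuse to shrink that far.
segments='root 0 3840
free 3840 25600
data 29440 102400
free 131840 2560'

free_runs=$(printf '%s\n' "$segments" | awk '$1 == "free"' | wc -l)
echo "free-space runs: $free_runs"
if [ "$free_runs" -gt 1 ]; then
  echo "non-contiguous free space: defragment first, e.g."
  echo "  pvmove --alloc anywhere /dev/sdX3:29440-131839 /dev/sdX3:3840"
fi
```

In this example, moving the data LV's extents down next to root would merge the two free runs into one contiguous region at the end of the PV, which pvresize can then cut off.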
- General Steps
- In the PVE host admin console, back up all VMs and LXCs, remove them, and remove the data LV:
-
Bash:
# log in to the PVE host console
# back up VMs and LXCs, then remove them one by one

# remove the data LV
lvremove pve/data

# back up the /etc/pve directory in case we have to reinstall PVE
cd /etc/
tar -zcf /tmp/etcpve.tgz ./pve
# copy the tar ball somewhere off the host
-
- Resize the swap and root LVs, and PV:
This can be done easily, as there is plenty of free space in the PV after the data LV deletion.
-
Code:
# boot into GParted Live
# use lvreduce/lvresize to adjust the root and swap LVs
e2fsck -f /dev/mapper/pve-root
resize2fs /dev/mapper/pve-root 15G
lvreduce -L 15G /dev/pve/root

# resize the PV to fit the smaller new disk
# it's actually better to set it much smaller, because there is no data LV yet:
# this speeds up the dd cloning considerably (imagine cloning a PV holding
# only the swap and root LVs)
# PVE's PV is typically in partition 3
pvresize --setphysicalvolumesize 30G /dev/sd[SRC]3
-
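A quick sanity check for the --setphysicalvolumesize target: the new PV size must still cover the LVs that remain after the data LV was deleted, plus some headroom for LVM metadata. The sizes below are hypothetical, matching the 15G root example above and an assumed 8G swap:

```shell
# Hypothetical sizes: the target PV must be larger than the sum of the
# LVs remaining after the data LV was deleted.
ROOT_GIB=15       # from the lvreduce above
SWAP_GIB=8        # assumed swap size
TARGET_PV_GIB=30  # value passed to pvresize --setphysicalvolumesize

used=$(( ROOT_GIB + SWAP_GIB ))
echo "remaining LVs use ${used}G of the ${TARGET_PV_GIB}G target PV"
if [ "$TARGET_PV_GIB" -gt "$used" ]; then
  echo "ok: the PV can be shrunk to ${TARGET_PV_GIB}G"
fi
```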
- Clone the Disk:
-
Bash:
# double-check the disks; make sure the src & dst disks are correct
lsblk

# clone with dd, using count as the stop condition
# count = new disk size / bs - a safety margin
dd if=/dev/sdSRC of=/dev/sdDST bs=4M [count=NNN] status=progress conv=sync,noerror

# alternatively, clone until the destination disk is full
# this runs the risk that errors are encountered before the disk is full
# (e.g. a sector read error)
dd if=/dev/sdSRC of=/dev/sdDST bs=4M status=progress conv=sync
-
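The count arithmetic for the first dd form can be sketched as follows. The 250 GiB target size and the 16-block margin are hypothetical; on a real system the byte size would come from something like blockdev --getsize64:

```shell
# Hypothetical target disk: 250 GiB, cloned in 4 MiB blocks, with a small
# safety margin so dd stops cleanly before the end of the disk.
BS=$((4 * 1024 * 1024))                    # matches bs=4M
DST_BYTES=$((250 * 1024 * 1024 * 1024))    # hypothetical target disk size
MARGIN=16                                  # safety margin, in blocks

count=$(( DST_BYTES / BS - MARGIN ))
echo "dd if=/dev/sdSRC of=/dev/sdDST bs=4M count=$count status=progress conv=sync,noerror"
```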
- Adjust the GPT table:
-
Bash:
# within the GParted Live session, use gdisk to repair the GPT tables
gdisk /dev/sdDST
# select recovery and transformation mode (r)
# select d to rebuild the backup table from the primary table
# w to write and quit

# in the GParted app, refresh devices and double-check the result

# resize the partition to its final size
# Note: in step 2 we may have set the PV much smaller than the new disk to
# speed up cloning, or only slightly smaller for safety. In both cases, now
# resize the partition to the max size allowed:
# - in the GParted app, resize the partition to fill the new disk
# - in the console, resize the PV to fill the partition (typically
#   partition 3; please double check)
pvresize /dev/sdDST3
-
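Why gdisk's repair is needed can be seen from where the backup GPT header must live: always in the disk's last sector. A sketch with hypothetical sizes (500 GiB source, 250 GiB target, 512-byte sectors):

```shell
SECTOR=512
SRC_BYTES=$((500 * 1024 * 1024 * 1024))   # hypothetical source disk
DST_BYTES=$((250 * 1024 * 1024 * 1024))   # hypothetical (smaller) target disk

src_backup_lba=$(( SRC_BYTES / SECTOR - 1 ))
dst_backup_lba=$(( DST_BYTES / SECTOR - 1 ))
echo "cloned primary header points at LBA $src_backup_lba, beyond the new disk"
echo "gdisk rebuilds the backup table to end at LBA $dst_backup_lba"
```

After cloning to a smaller disk, the primary header still points past the end of the device, which is exactly what confuses GParted until gdisk rewrites both tables.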
- Swap the disks and reboot into PVE, re-create the data LV and restore the VMs and LXCs
-
Bash:
# swap the hard drives, boot, and log in to the PVE host admin console
# re-create the data volume; pool metadata is sized at ~3% of the data LV

# first create pve/data on 90% of the free space, leaving room for the metadata
lvcreate -l 90%FREE -n data pve

# convert to a thin pool; set the metadata size to ~3% of the data LV size
lvconvert --type thin-pool --poolmetadatasize <SIZE> pve/data

# expand the pool to fill the remaining free space
lvextend -l +100%FREE pve/data

# restore VMs and LXCs from backup
-
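The 3% metadata sizing mentioned above can be computed like this; the 400 GiB data LV is a hypothetical example, not a size from the original setup:

```shell
# Hypothetical: a 400 GiB data LV, pool metadata sized at ~3% of it.
DATA_LV_GIB=400
meta_mib=$(( DATA_LV_GIB * 1024 * 3 / 100 ))
echo "lvconvert --type thin-pool --poolmetadatasize ${meta_mib}M pve/data"
```

Note that LVM caps thin-pool metadata at roughly 16 GiB, so for very large data LVs the 3% rule stops applying and the cap wins.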
- Resources
- 3 methods to defragment a PV in order to shrink it.
- 12.10 - How to shrink Ubuntu LVM logical and physical volumes? - Ask Ubuntu
This answer explains how to shrink a PV when multiple LVs are spread across it, making shrinking difficult.