[TUTORIAL] How to Migrate PVE to a Smaller Disk

pvefanmark

New Member
Jan 29, 2024
I recently researched and carried out a migration of a PVE host from a larger disk to a smaller one, including shrinking the root LV. These are my own notes, which I believe can help others. Of course, you can always reinstall PVE and copy over the configuration files from the old host, but what would be the fun in that? ;)

The notes are useful because they touch on a few important constraints in tools such as GParted, dd and LVM2, and may help others who need to perform this type of operation. Please feel free to correct my mistakes and I will update the notes accordingly.

Cheers.

  • Constraints
    • PVE has 3 LVs: root, swap and data, all in one PV
      • root and swap are regular LVs that can be expanded or shrunk with the lvresize/lvreduce commands, as long as there is free space. You can only do this when booted into a Linux live CD (such as GParted Live), since the root file system cannot be shrunk while it is mounted.
      • data is a thin pool volume. Only expansion (not shrinking) is supported. To make it smaller, you must back up all VMs and LXCs, delete them, delete and re-create the data LV, and then restore the VMs and LXCs. This can be done from the PVE host admin console, without booting into a live CD.
      • After LVs are shrunk, you must shrink the PV that contains them before GParted can shrink the containing partition to fit the smaller disk. The size of the PV limits how much the partition can be shrunk.
      • Even after the LVs are shrunk, the PV may still not be shrinkable because the LVs are spread across the PV and the free space is non-contiguous. This is mainly a problem when a large root LV is shrunk, leaving a big gap of free space between the root LV and the data LV. The data LV itself is not an issue, as it needs to be deleted and re-created anyway. Therefore, it's better to shrink the root LV first and only then recreate the data LV, thus eliminating the free space between the root and data LVs.

        Otherwise, you must defragment the PV in order to shrink it. To do that, there must be an adequate amount of free space in the PV to shuffle LVs around. There are 3 methods (see Resources for more details, and the sketch at the end of this Constraints list). Thus it's best to avoid the fragmentation problem entirely by shrinking root and/or swap before recreating the data LV.
        • manually with pvmove
        • pvshrink (a python package from github)
        • lvm2defrag: a planning tool to generate a sequence of LVM commands
    • GParted does not copy partitions containing LVM physical volumes; the "copy" menu item is greyed out. It only copies partitions with file systems it fully supports, such as ext2/3/4. This leaves us with dd (plus Clonezilla) as the cloning tools.
    • dd will copy the primary GPT table when cloning a disk, but the backup table, which belongs at the end of the disk, ends up misplaced (when cloning to a larger disk) or missing (when cloning to a smaller disk). What's more, the GPT data no longer matches the new disk size. This confuses GParted. It can be fixed with gdisk (gdisk /dev/sdX): use its recovery and transformation menu to repair the primary GPT table and rebuild the backup table.
    • It's quite safe to use pvresize to shrink the PV as it will refuse to proceed if blocked by fragmented LVs.
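    • A minimal defragmentation sketch, in case the PV cannot be shrunk directly. The extent ranges below are made up for illustration; read your own layout from the pvs output first. The --test dry-run flag is a general LVM option that I assume pvresize honors.
      • Bash:
        # show how LV segments are laid out across the PV (physical extent ranges)
        pvs -v --segments /dev/sd[SRC]3
        
        # hypothetical: move a segment that sits near the end of the PV into
        # free extents near the start, so the tail of the PV becomes free
        pvmove --alloc anywhere /dev/sd[SRC]3:95000-99999 /dev/sd[SRC]3:4000-8999
        
        # dry-run the shrink; pvresize refuses if allocated extents are still in the way
        pvresize --test --setphysicalvolumesize 30G /dev/sd[SRC]3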
  • General Steps
    • In PVE host admin console, backup all VMs and LXCs. Remove them and remove the data LV:
      • Bash:
        # log in to the PVE host console
        # back up all VMs and LXCs, then remove them one by one
        # (a concrete sketch follows below this block)
        
        # remove the data LV
        lvremove pve/data
        
        # back up the /etc/pve directory in case PVE has to be reinstalled
        cd /etc/
        tar -zcf /tmp/etcpve.tgz ./pve
        # copy the tarball off the host
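      • A hypothetical, concrete version of the backup/remove pseudo-steps above. The guest IDs 100/101 and the storage name "backupnas" are placeholders for your own, and the backup target must not live on the disk being migrated.
      • Bash:
        # list the guests on this host
        qm list
        pct list
        
        # back up each VM and LXC with vzdump to an external/backup storage
        vzdump 100 --storage backupnas --mode stop
        vzdump 101 --storage backupnas --mode stop
        
        # after verifying the backup archives, stop and remove the guests
        qm stop 100
        qm destroy 100
        pct stop 101
        pct destroy 101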
    • Resize the swap and root LVs, and the PV:
      This can be done easily as there is plenty of free space in the PV after the data LV deletion.
      • Code:
        # boot into a GParted Live session
        
        # use lvreduce/lvresize to adjust the root and swap LVs;
        # for root, shrink the file system first, then the LV
        e2fsck -f /dev/mapper/pve-root
        resize2fs /dev/mapper/pve-root 15G
        lvreduce -L 15G /dev/pve/root
        
        # resize the PV down so it fits the smaller new disk.
        # It's actually better to set it much smaller, because there is no data LV yet;
        # this speeds up the dd cloning considerably (dd only has to cover the swap and root LVs).
        # PVE's PV is typically on partition 3.
        pvresize --setphysicalvolumesize 30G /dev/sd[SRC]3
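      • Optional sanity check before cloning (read-only reporting commands; a sketch assuming the usual pve VG name):
      • Bash:
        # confirm the new root LV size and that the PV now reports the smaller size
        lvs -o lv_name,lv_size pve
        pvs -o pv_name,pv_size,pv_free /dev/sd[SRC]3
        vgs pve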
    • Clone the Disk:
      • Bash:
        # double-check the disks; make sure the src & dst disks are correct
        lsblk
        
        # option 1: clone with dd, using count as the stop condition
        # count = new disk size / bs, minus a safety margin (a worked example follows below this block)
        dd if=/dev/sdSRC of=/dev/sdDST bs=4M [count=NNN] status=progress conv=sync,noerror
        
        # option 2: clone with dd and let the destination filling up be the stop condition;
        # this runs the risk that an error (e.g. a sector read error) aborts the copy before the disk is full
        dd if=/dev/sdSRC of=/dev/sdDST bs=4M status=progress conv=sync
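      • One way to work out the count value for option 1 above (a sketch; 4M = 4194304 bytes, and subtracting 100 blocks leaves roughly a 400 MiB margin, which is an arbitrary choice):
      • Bash:
        # destination disk size in bytes
        blockdev --getsize64 /dev/sdDST
        
        # count = floor(destination bytes / 4 MiB) minus a safety margin
        echo $(( $(blockdev --getsize64 /dev/sdDST) / 4194304 - 100 ))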
    • Adjust the GPT table:
      • Bash:
        # within the GParted Live session, use gdisk to repair the GPT tables
        gdisk /dev/sdDST
        # enter the recovery and transformation menu (r)
        # choose d to rebuild the backup table from the primary table
        # write the changes (w) and quit
        
        # in the GParted app, refresh devices and double-check the result
        
        # resize the partition to its final size.
        # Note: in step 2 we may have set the PV much smaller than the new disk to speed up cloning,
        # or only slightly smaller than the new disk for safety.
        # In both cases, now resize the partition to the maximum size allowed:
        # in the GParted app, resize the partition to fill the new disk, then in the console
        # resize the PV to fill the partition (typically partition 3, please double-check)
        pvresize /dev/sdDST3
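      • Optional verification of the repaired table (read-only checks; sgdisk usually ships alongside gdisk on live images, but that is an assumption about your environment):
      • Bash:
        # print the partition table and let gdisk report any remaining problems
        gdisk -l /dev/sdDST
        
        # verify GPT consistency
        sgdisk --verify /dev/sdDST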
    • Swap the disks and reboot into PVE, re-create the data LV and restore the VMs and LXCs
      • Bash:
        # swap the hard drives, boot, and log in to the PVE host admin console
        
        # from the command line, recreate the data volume.
        # First create pve/data at 90% of the free space to leave room for the thin-pool metadata.
        lvcreate -l 90%FREE -n data pve
        
        # convert it to a thin pool; set the pool metadata size to roughly 3% of the data LV size
        lvconvert --type thin-pool --poolmetadatasize [SIZE] pve/data
        
        # expand the pool to fill the remaining free space
        lvextend -l +100%FREE pve/data
        
        # restore VMs and LXCs from backup (a sketch follows below this block)
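      • Hypothetical restore commands once pve/data is back. The archive paths and IDs are placeholders, and "local-lvm" is only the default name of the thin-pool storage, which may differ on your host.
      • Bash:
        # restore a VM backup onto the recreated thin pool
        qmrestore /mnt/backup/vzdump-qemu-100.vma.zst 100 --storage local-lvm
        
        # restore an LXC backup
        pct restore 101 /mnt/backup/vzdump-lxc-101.tar.zst --storage local-lvm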
  • Resources
 
Thanks for the tutorial. (Maybe mark it as such; At the top of the thread, choose the Edit thread button, then from the (no prefix) dropdown choose Tutorial).

I've been through similar procedures myself in the past, until I decided:

1. For the above procedure you anyway need full & restorable backups of all VMs & LXCs. (You've got that in your post).
2. For such a serious server-host procedure, one definitely wants a copy/backup of the full PVE host, as a fallback if something goes wrong.
3. The Proxmox server host should contain as few changes as possible to its root OS etc. (Use LXCs etc. for everything when possible).
4. If you do have to make changes to the Proxmox server host, document each one in a safe place.

In light of the fact that there is no easy procedure to accomplish point 2 above, we have 2 choices:

(a) Do a fresh install on the new disk & restore all VMs & LXCs from the backups, then add any changes you may have made to the Proxmox server host (point 4 above).
(b) dd the original disk to another disk of at least the same size, then swap out the disks, run through your whole procedure, & then insert the smaller disk.

With both (a) & (b) we will have the original working OS disk as a fallback.
I prefer (a) as it involves less work & I always like a fresh install.

Anyway thanks again for all your documentation.
 
You have a well-reasoned thought process. I didn't know precisely what would be involved, so I chose to clone instead of reinstalling PVE. The steps of reinstalling and restoring the configuration are also pretty nebulous to me, especially considering the updates I have received since installing, and I am not certain my customized configuration would match what a fresh install expects.

You are correct, I should make an OS image (of the PVE root and swap) in case something goes wrong. Instead, I did a backup of the /etc/pve files in case something went terribly wrong and I had to reinstall PVE. I will add these notes to the tutorial above. Thanks!
 
