Duplicating A Node Through Backup And Restore

Tony C

Member
May 28, 2020
Hello all,

I wish to duplicate a working PVE node onto another machine but without copying the VMs across. Some points:
  • VM storage is done using lvmthin.
  • PVE version 6.3-3.
  • Quite a lot of tweaking has gone into the node to meet my requirements; nothing outlandish, but it's not a case of simply installing a fresh instance of PVE.
  • I intend to shut down the node, back up its root partition and then restore it on a new machine in a different location, along with all the requisite PVs, VGs and LVs like pve-root/pve-swap. I have done this sort of thing with other Linux distros, so the core principle is fine. Obviously some config, like the IP address and the certs, will have to change both in the OS and under PVE itself.
  • Beyond the backup, I don't want to change the original node, i.e. remove all the VMs and then restore them again.
  • In the longer term, if necessary (as this is hopefully a one-off), I'll write Ansible scripts to cover my customisations, but I can't justify investing the time in that right now.
  • I don't want to transfer the VMs.
So my question is: if I simply delete the VM conf files under the relevant /etc/pve/qemu-server directory, will that completely remove those VMs from any databases once PVE is restarted, or does deleting a VM by the normal route do more than, in effect, removing a conf file and its associated LVs?

MTIA, Tony.
 
There might also be a corresponding firewall config, and references to the VM in backup/replication/HA configs.
 
Hi Fabian, many thanks for the info. I take it the config SQL database will simply update itself on finding deleted config files...
 
You can also walk through /etc/pve and look at (and optionally remove/edit) the config files... there aren't that many ;)
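
To illustrate the above, here's a minimal sketch of how you might check for leftover references to a guest; the VMID 100 is just an example, and not every one of these files will exist on every setup:
Bash:
#!/bin/sh
# Sketch only - look for leftovers belonging to an example VMID under /etc/pve.
VMID=100

# Per-guest files: the VM config itself and any firewall rules for it.
ls -l /etc/pve/qemu-server/${VMID}.conf /etc/pve/firewall/${VMID}.fw 2>/dev/null

# Shared files that may reference the VMID: backup jobs, replication and HA.
grep -n "\b${VMID}\b" /etc/pve/vzdump.cron /etc/pve/replication.cfg \
    /etc/pve/ha/resources.cfg 2>/dev/null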
 
Ok just to give some feedback in case someone else wants to do this...

It actually works surprisingly well, but the system only had one node and wasn't part of a cluster. However, use these instructions at your own risk... you've been warned!

So I backed up the system, minus the VM disk images (stored as lvm-thin volumes on a second disk). The VM configurations were included in the backup, as I didn't want to change the original system - hence the original question above.

The backup and restore process (onto the new system) is pretty standard stuff and I won't go into much detail here other than some terse bullet points:
  • Backing Up Existing System - I used a live CD (based on Ubuntu 18.04, actually, but it's probably best to use a Debian one!) and the dump/restore utilities to back up the root volume. Use dump's -E switch to exclude files that you don't want to back up (like ISO images etc.).
    • The find command can be used to generate a list of inode numbers for the files to exclude (find /mnt/var/lib/vz/template/iso -type f -printf "%i\n" > exclude-files), then back up with something like dump 0Ef exclude-files - /dev/mapper/pve-root | pigz -c > root.dmp.gz.
    • Use gdisk -l on the root device to dump out the partition table, along with blkid to dump out the filesystem and swap partition labels and UUIDs. Save these to disk. E.g. gdisk -l /dev/nvme0n1 > partition-info.txt and blkid > blkid-info.txt.
    • Use assorted pvs/vgs/lvs commands to dump out the LVM information. Again save this to backup media.
  • Restoring Onto New System - Using the same live CD:
    • Use gdisk to recreate the partition table. Make sure that the first partition is the same size and starts at the same offset as on the original system. Likewise make sure that the second partition starts at the same offset as on the original system but feel free to resize that and any remaining partitions to suit your needs. Also make sure you tag each partition with the correct type codes.
    • Use the relevant pvcreate, vgcreate and lvcreate commands to recreate the pve-swap and pve-root volumes (see the volume-recreation sketch after this list).
    • Format the volumes using mkswap and mkfs.ext4. Use the relevant switches to explicitly specify any volume labels (for file systems) and UUIDs (for both file systems and swap partitions). Get the label and UUID information from the blkid-info.txt file that you created earlier.
    • Mount /dev/mapper/pve-root, cd into it and restore the system with pigz -cd <Backup Path>/root.dmp.gz | restore if -; inside restore's interactive shell, run add * and then extract.
    • Switch to the restored root partition by using the go-chroot script detailed below.
    • Install the boot loader with something like grub-install /dev/sda.
    • Update grub with update-grub.
    • Update the networking/IP address details. Typically /etc/network/interfaces, and possibly /etc/resolv.conf and /etc/ntp.conf, depending upon whether you're staying on the same network or not.
    • If need be, change the hostname of the system. Something like find /etc -xdev -type f -print0 | xargs -0 fgrep -l <Old Hostname> can help to find the files that need changing. You could do /var as well, but you need to take the results with a pinch of salt and be selective and sensible about what you change.
    • Switch back out of the restored root partition and unmount it.
    • Reboot, keeping your fingers crossed!
  • Clean Up - Things I did:
    • Generate and replace the web site certificates for the PVE web UI (see the clean-up sketch after this list).
    • Remove all VMs; the missing volumes don't matter, as PVE seems quite happy to remove a VM with a missing disk.
    • Shut down the PVE services and then clear out the data under /var/lib/rrdcached, /var/log/pve/tasks and /var/log/pveproxy.
    • Restart the system.
  • Create A Test VM - and check that everything works as expected. Check the system log for any nasty messages with something like journalctl -b -p 3.
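As a concrete illustration of the volume-recreation and formatting steps above, here's a rough sketch of what I mean; the device name, sizes and UUIDs are obviously placeholders and should be replaced with the values recorded in partition-info.txt, blkid-info.txt and the LVM dumps:
Bash:
#!/bin/sh
# Sketch only - recreate the PVE volume group and volumes on the new disk.
# The device, sizes and UUIDs below are placeholders; use the values saved
# from the original system.
pvcreate /dev/nvme0n1p3
vgcreate pve /dev/nvme0n1p3
lvcreate -L 8G -n swap pve
lvcreate -L 96G -n root pve

# Reuse the original UUIDs (from blkid-info.txt) so fstab still matches;
# add -L <label> as well if the original filesystems were labelled.
mkswap -U 11111111-2222-3333-4444-555555555555 /dev/pve/swap
mkfs.ext4 -U 22222222-3333-4444-5555-666666666666 /dev/pve/root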
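
Similarly, a rough sketch of the clean-up steps for the certificates and the copied-over statistics/logs; pvecm updatecerts is one way to regenerate the node's SSL certificate, and the paths are the ones from my notes above:
Bash:
#!/bin/sh
# Sketch only - regenerate the web UI certificate and clear out state that
# was copied across from the original node.
systemctl stop pveproxy pvedaemon pvestatd rrdcached

# One way to regenerate the node SSL certificate (needs pve-cluster running).
pvecm updatecerts --force

# Drop the old node's RRD statistics and the old task/access logs.
rm -rf /var/lib/rrdcached/db/*
rm -rf /var/log/pve/tasks/* /var/log/pveproxy/*

# Then reboot (or restart the stopped services).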
The go-chroot script:
Bash:
#!/bin/bash
# Assumes the restored root filesystem is already mounted at /mnt.
# Bind mount the pseudo filesystems that the chrooted system will need.
mount --bind /dev /mnt/dev
mount --bind /dev/pts /mnt/dev/pts
mount --bind /proc /mnt/proc
mount --bind /sys /mnt/sys
# Temporarily borrow the live CD's resolv.conf so DNS works inside the chroot.
mv /mnt/etc/resolv.conf /mnt/etc/resolv.conf-orig
cp /etc/resolv.conf /mnt/etc/resolv.conf
export HOME=/root
chroot /mnt /bin/bash
# Everything below runs once you exit the chrooted shell: put resolv.conf
# back, clean /tmp and undo the bind mounts.
rm /mnt/etc/resolv.conf
mv /mnt/etc/resolv.conf-orig /mnt/etc/resolv.conf
rm -rf /mnt/tmp/* /mnt/tmp/.??*
umount /mnt/dev/pts
umount /mnt/dev
umount /mnt/proc
umount /mnt/sys
exit 0
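
For context, the script assumes the restored root has already been mounted at /mnt, so a typical invocation looks something like this:
Bash:
mount /dev/mapper/pve-root /mnt
./go-chroot      # drops you into a shell inside the restored system
# ... run grub-install, update-grub, edit the network config, then exit ...
umount /mnt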

The hacky script for disabling PVE so that it doesn't automatically start up on boot, called disable-enable-pve:
Bash:
#!/bin/sh

usage()
{
    echo "Usage: disable-enable-pve enable|disable" >&2
}

# I have already permanently disabled this timer; you may not have.
# services="pve-daily-update.timer
services="pvesr.timer
          pve-manager.service
          pvenetcommit.service
          pve-lxc-syscalld.service
          pvebanner.service
          pve-firewall.service
          pve-ha-lrm.service
          pvestatd.service
          pve-ha-crm.service
          pve-guests.service
          pvedaemon.service
          pveproxy.service
          pve-cluster.service
          lxc-monitord.service
          lxc-net.service
          lxc.service
          lxcfs.service
          rrdcached.service
          spiceproxy.service
          iscsi.service
          iscsid.service
          open-iscsi.service"

if test $# -ne 1 -o \( "$1" != "enable" -a "$1" != "disable" \)
then
    usage
    exit 1
fi

for s in $services
do
    echo "${s}:"
    if test "$1" = "enable"
    then
        systemctl unmask $s > /dev/null
        systemctl enable $s > /dev/null
    else
        systemctl stop $s > /dev/null
        systemctl disable $s > /dev/null
        systemctl mask $s > /dev/null
    fi
done
systemctl list-unit-files | egrep 'pve|lxc|rrdcached|iscsi' | sort

if test "$1" = "enable"
then
    echo "All PVE services re-enabled, please reboot."
fi

exit 0

Hope this is of use.
 