Hello,
I am thrilled to see pct remote_migrate and qm remote_migrate in production.
I read:
https://forum.proxmox.com/threads/how-to-migrate-vm-from-one-pve-cluster-to-another.68762/page-2
and thought I could share the script we use to migrate VMs between clusters (over 400 VMs so far).
Maybe it helps someone, or someone wants to make some improvements, put some more life into it, and release it on GitHub.
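For anyone who just wants the built-in way: as far as I know the call looks roughly like this (experimental since PVE 7.3; the API token, fingerprint, bridge and storage below are placeholders, check man qm for your version). Our script below does the same job with plain rbd commands.
Code:
# built-in remote migration (experimental) -- token/fingerprint are placeholders
qm remote-migrate 100 200 \
  'host=10.10.50.1,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<target-cert-fingerprint>' \
  --target-bridge vmbr0 --target-storage cluster5-rbd --online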
Code:
#!/bin/bash
set -e
SRCID=$1
DSTID=$2
# Check that the configuration file exists
if [ ! -f proxmox_migrate_vm.ini ]; then
echo "Error: config file proxmox_migrate_vm.ini missing"
exit 1
fi
# Load the variables from the configuration file
source proxmox_migrate_vm.ini
# Check that all configuration variables and both arguments are set
if [ -z "$DST" ] || [ -z "$SRCID" ] || [ -z "$DSTID" ] || [ -z "$DSTHOSTNAME" ] || [ -z "$SRC_RBD" ] || [ -z "$DST_RBD" ]; then
echo "Error: not all configuration variables are set, or argument 1 (SRCID) / argument 2 (DSTID) is missing"
exit 1
fi
if [ ! -f "/etc/pve/nodes/`hostname`/qemu-server/$SRCID.conf" ]; then
echo "Config nicht gefunden: /etc/pve/nodes/`hostname`/qemu-server/$SRCID.conf"
exit 1
fi
# Check on the target node that the destination VMID is not already taken
if ssh $DST "test -e /etc/pve/nodes/$DSTHOSTNAME/qemu-server/$DSTID.conf"; then
echo "Wake Up! VM already exists!"
exit 1
fi
echo "Press ENTER to DELETE all snapshots"
read FOO
## Get the storage names: take the first "rbd:" entry from storage.cfg on each side
SRC_STORAGE="$(grep "rbd:" /etc/pve/storage.cfg | head -n1 | awk '{ print $2 }')"
scp $DST:/etc/pve/storage.cfg /tmp/dst-storage.cfg
DST_STORAGE="$(grep "rbd:" /tmp/dst-storage.cfg | head -n1 | awk '{ print $2 }')"
rm -f /tmp/dst-storage.cfg
cp /etc/pve/nodes/$(hostname)/qemu-server/$SRCID.conf /tmp/pve.conf
# Rewrite disk names and the storage name for the destination
sed -i "s/vm-${SRCID}-disk/vm-${DSTID}-disk/g" /tmp/pve.conf
sed -i "s/$SRC_STORAGE/$DST_STORAGE/g" /tmp/pve.conf
# Truncate at the first blank line (drops all snapshot sections)
sed '/^$/q' -i /tmp/pve.conf
scp /tmp/pve.conf $DST:/etc/pve/nodes/$DSTHOSTNAME/qemu-server/$DSTID.conf
DISKS="$(grep --only-matching -E "disk-[0-9]+" /tmp/pve.conf | sort -u)"
rm /tmp/pve.conf
# Copy the VM firewall config if one exists
if [ -e "/etc/pve/firewall/$SRCID.conf" ]; then
scp "/etc/pve/firewall/$SRCID.conf" "$DST:/etc/pve/firewall/$DSTID.conf"
fi
for DISK in $DISKS; do
echo "Disk: $DISK"
# Drop all existing snapshots, then take a fresh base snapshot for the full copy
rbd snap purge $SRC_RBD/vm-${SRCID}-$DISK
rbd snap create $SRC_RBD/vm-${SRCID}-$DISK@snap1
# Full export of the (still running) disk, streamed over SSH into the destination pool
rbd export $SRC_RBD/vm-${SRCID}-$DISK@snap1 - | ssh $DST rbd import - $DST_RBD/vm-${DSTID}-$DISK
done
echo "Press ENTER to continue"
read FOO
qm shutdown $SRCID
sleep 20
# Wait until the VM is really powered off
until qm status $SRCID | grep -q stopped
do
echo "Waiting for VM to stop"
sleep 3
done
# Now transfer the diff
for DISK in $DISKS; do
# Second snapshot after shutdown; snap1..snap2 is everything that changed during the full copy
rbd snap create $SRC_RBD/vm-${SRCID}-$DISK@snap2
# The destination needs a snapshot named snap1 as the reference point for import-diff
ssh $DST "rbd snap create $DST_RBD/vm-${DSTID}-$DISK@snap1"
rbd export-diff --from-snap snap1 $SRC_RBD/vm-${SRCID}-$DISK@snap2 - | ssh $DST rbd import-diff - $DST_RBD/vm-${DSTID}-$DISK
echo "Cleaning up..."
rbd snap rm $SRC_RBD/vm-${SRCID}-$DISK@snap1
rbd snap rm $SRC_RBD/vm-${SRCID}-$DISK@snap2
rbd snap ls $SRC_RBD/vm-${SRCID}-$DISK
ssh $DST "rbd snap rm $DST_RBD/vm-${DSTID}-$DISK@snap1"
ssh $DST "rbd snap rm $DST_RBD/vm-${DSTID}-$DISK@snap2"
ssh $DST "rbd snap ls $DST_RBD/vm-${DSTID}-$DISK"
done
echo "Starting VM $DSTID..."
ssh $DST "qm start $DSTID"
echo "Finished."
The ini file (proxmox_migrate_vm.ini) is:
---------------
DST="10.10.50.1"
DSTHOSTNAME="cluster5-node01"
SRC_RBD="cluster3-rbd"
DST_RBD="cluster5-rbd"
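With that ini next to the script, a migration is started with the source VMID and the desired destination VMID as arguments (the script name proxmox_migrate_vm.sh is just what I call it here):
Code:
# migrate VM 100 to the destination cluster as VM 200
./proxmox_migrate_vm.sh 100 200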
It's not perfect. It will:
- wipe your local snapshots
- transfer the rbd disk to the remote host
- shut down the VM (make sure it shuts down cleanly!)
- transfer the rbd diff
- start the VM
That's how we upgrade the clusters: we always keep one empty/spare cluster to rotate into, so the big upgrades happen on an empty cluster.
The rocket science here is just the rbd export-diff via SSH; per disk it boils down to the sketch below.
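Using the example values from the ini above (image names are just examples):
Code:
# full copy while the VM is still running
rbd snap create cluster3-rbd/vm-100-disk-0@snap1
rbd export cluster3-rbd/vm-100-disk-0@snap1 - | ssh 10.10.50.1 rbd import - cluster5-rbd/vm-200-disk-0
# after a clean shutdown: send only the blocks that changed since snap1
rbd snap create cluster3-rbd/vm-100-disk-0@snap2
ssh 10.10.50.1 "rbd snap create cluster5-rbd/vm-200-disk-0@snap1"
rbd export-diff --from-snap snap1 cluster3-rbd/vm-100-disk-0@snap2 - | ssh 10.10.50.1 rbd import-diff - cluster5-rbd/vm-200-disk-0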