Rename/Move KVM VMs' VMID tags?

fortechitsolutions

I have just taken over management of 3 ProxVE machines which were set up as standalone boxes and are now intended to become a cluster.

All 3 machines started assigning VMID tags at 101, so we have duplicate 101, 102, 103 ... VMIDs when considering all 3 systems together.

Fortunately most of the affected VMs are 'test boxes', but it is still a mess.

I am wondering: is there a fairly robust way to change the VMID on a KVM virtual machine, so that ProxVE will identify it by the new ID tag? Or is there any recommended method to make the transition to a clustered ProxVE setup as painless as possible in this circumstance?

I already, for "fun" tried to rename a dir and qcow file each bearing a VMID tag (101) under the images/ subdir for the VM datastore (SSH admin access to the ProxVE host). But this made no apparent change to the web interface rendition of the VM's present. So clearly this is not enough.

Any comments / pointers are greatly appreciated. (Even if only to say, "delete the test VMs, make the cluster, rebuild them again!") :)


Tim Chipman
 
I already, for "fun" tried to rename a dir and qcow file each bearing a VMID tag (101) under the images/ subdir for the VM datastore (SSH admin access to the ProxVE host). But this made no apparent change to the web interface rendition of the VM's present. So clearly this is not enough.

In addition to the image directory and qcow file, you also need to rename the config file /etc/qemu-server/VMID.conf, and edit the image names referenced inside it if you changed them.
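
For example, for a hypothetical move of VMID 101 to 105 with a single qcow2 disk (adjust the file names to what your config actually references), the complete rename looks like:

Code:
# move the image directory and the disk file inside it
mv /var/lib/vz/images/101 /var/lib/vz/images/105
mv /var/lib/vz/images/105/vm-101-disk.qcow2 /var/lib/vz/images/105/vm-105-disk.qcow2
# rename the config file, then fix the storage path and disk name inside it
mv /etc/qemu-server/101.conf /etc/qemu-server/105.conf
sed -i -e "s/:101\//:105\//" -e "s/vm-101/vm-105/" /etc/qemu-server/105.conf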
 
Hi, just wanted to summarize back to the thread, in case this will help others.

I followed the process summarized below, and things went very smoothly.

Context:

- 3 ProxVE machines, all installed as standalone
- all had VMs present at stock IDs (101, 102, 103), so there were 'collisions'
- fortunately, in my case only one system had production VMs, so those were left unchanged and running; on the other 2 hosts the VMs were powered off during the renaming transition
- all VMs which were in 'collision' were renamed on 2 of the 3 hosts; the 3rd host was left alone
- after renaming was complete, all hosts were joined into a ProxVE cluster; this had no impact on the existing (renamed) VMs and caused no problems
- new VMs created within the cluster now automatically start at the correct VMID sequence point, following the highest VMID tag that existed across all 3 systems

So it was very smooth / clean / easy. The only way it might have been 'hard' is if I had had active production VMs up and running with collision VMIDs; I suspect (*but am not sure*) that you can't rename the VMID of an actively running VM.

---Tim

------------paste-------------

GO TO DIR WHERE VM IMAGES LIVE; RENAME AS NEEDED:
cd /var/lib/vz/images/
mv 101 105
cd 105/
mv vm-101-disk.qcow2 vm-105-disk.qcow2

GO TO DIR WHERE QEMU CONFIG DATA LIVES; RENAME & EDIT:
cd /etc/qemu-server/
mv 101.conf 105.conf

THEN EDIT THE FILE AND CHANGE THE REF TO THE QCOW NAME FROM 101 -> 105:
vi 105.conf
mv .lock-101.conf .lock-105.conf
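
To sanity-check the result afterwards (105 being the new VMID from the example above):

Code:
# the VM should now be listed under its new VMID
qm list
# boot it to confirm the edited disk reference resolves
qm start 105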



Note: in my setup here, all VMs were KVM-based; no OpenVZ VMs were migrated. I believe editing the config files under /etc/vz would be required / appropriate if you were in the situation of having collision VMID tags on VMs virtualized with OpenVZ rather than KVM.
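
I haven't tested it, but based on the same pattern, a rough sketch of the OpenVZ equivalent for a stopped container (again 101 -> 105; /etc/vz/conf is the classic OpenVZ config location) might look like:

Code:
# rename the container config
mv /etc/vz/conf/101.conf /etc/vz/conf/105.conf
# move the container's private area and root mount point
mv /var/lib/vz/private/101 /var/lib/vz/private/105
mv /var/lib/vz/root/101 /var/lib/vz/root/105
# update path and veth interface references inside the config
sed -i -e "s/\/101/\/105/" -e "s/veth101/veth105/" /etc/vz/conf/105.conf
# note: quota files under /var/lib/vzquota may also need renaming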
 
Thanks fortechitsolutions and dietmar. I had several VMs to move around, so I wrote a shell script to do this.

Code:
#!/bin/bash
# (bash rather than plain sh: the [[ ]] tests below are a bashism)


# vm_move.sh
# Moves VM to another VMID


# Syntax:
# ./vm_move.sh <Source VMID> <Destination VMID>


# Notes:
# VM must be stopped
# Destination VMID must not exist


# Path to remote images (usually /mnt/pve/<StorageName>/images)
REMOTE_DIR="/mnt/pve/PVE_NAS/images"
# Path to local images (/var/lib/vz/images)
LOCAL_DIR="/var/lib/vz/images"
# Path to qemu config files (/etc/qemu-server)
QEMU_DIR="/etc/qemu-server"


# Source VMID Argument
SRC_VM=$1
# Destination VMID Argument
DEST_VM=$2


# Make sure VMIDs are valid (100-999)
if [[ -z $(echo "$SRC_VM" | grep "^[0-9][0-9][0-9]$") ]]
then
    echo "$SRC_VM is not a valid Source VMID (100-999)!"
    exit 2
fi
if [[ -z $(echo "$DEST_VM" | grep "^[0-9][0-9][0-9]$") ]]
then
    echo "$DEST_VM is not a valid Destination VMID (100-999)!"
    exit 2
fi


# Make sure destination VMID does not exist
if [ -e $QEMU_DIR/$DEST_VM.conf ]; then
    echo "VM $DEST_VM already exists!"
    exit 2
fi
# Make sure VM is not running ('qm status' prints "status: <state>")
if [ "$(qm status $SRC_VM | awk '{ print $2 }')" != "stopped" ]; then
    echo "VM $SRC_VM is $(qm status $SRC_VM | awk '{ print $2 }')!"
    exit 2
fi


# Local images
if [ -e $LOCAL_DIR/$SRC_VM ]; then
    # Move local image folders
    mv $LOCAL_DIR/$SRC_VM $LOCAL_DIR/$DEST_VM
    # Rename local image files
    rename "s/vm-$SRC_VM/vm-$DEST_VM/" $LOCAL_DIR/$DEST_VM/*
fi
# Remote images
if [ -e $REMOTE_DIR/$SRC_VM ]; then
    # Move remote image folders
    mv $REMOTE_DIR/$SRC_VM $REMOTE_DIR/$DEST_VM
    # Rename remote image files
    rename "s/vm-$SRC_VM/vm-$DEST_VM/" $REMOTE_DIR/$DEST_VM/*
fi


# Rename config file
mv $QEMU_DIR/$SRC_VM.conf $QEMU_DIR/$DEST_VM.conf
# Update config file
sed -i "s/:$SRC_VM\//:$DEST_VM\//" $QEMU_DIR/$DEST_VM.conf
sed -i "s/vm-$SRC_VM/vm-$DEST_VM/" $QEMU_DIR/$DEST_VM.conf
# Rename lock file
if [ -e $QEMU_DIR/.lock-$SRC_VM.conf ]; then
    mv -v $QEMU_DIR/.lock-$SRC_VM.conf $QEMU_DIR/.lock-$DEST_VM.conf
fi


# Finished
echo "Operation Completed"
 
Inspired by timdawg's script, but needing compatibility with Proxmox version 3.x and the ability to handle both CTs and VMs, I wrote the following script. I tried to maintain backward compatibility with version 2.x, but I no longer have a 2.x server to test with; if someone is willing to test on a 2.x server, post the results here and I'll update the script as needed. The script has not been tested in a cluster, and I suspect additional changes will be needed there. Be sure to create some test CTs and VMs and confirm the script works for them before risking your production servers. Post any feedback here. I hope others can benefit.

# cat bin/pve-move
Code:
#!/bin/bash
# (bash rather than plain sh: the [[ ]] tests below are a bashism)

# pve-move
# Moves VM or CT to another VMID of the same type

# Author: dude4linux at forum.proxmox.com
# Original script vm_move.sh by timdawg at forum.proxmox.com

# Syntax:
# pve-move <Source VMID> <Destination VMID>

# Notes:
# Running VMs will be stopped and restarted
# Destination VMID must not exist
# Has not been tested on Proxmox version 2.x
# Additional changes are likely needed in a clustered environment

# Path to openvz config files
if [ -d /etc/pve/openvz ]; then
    OPENVZ="/etc/pve/openvz"      #version 3.x
else
    OPENVZ="/etc/vz/conf"         #version 2.x
fi
# Path to qemu config files
if [ -d /etc/pve/qemu-server ]; then
    QEMU="/etc/pve/qemu-server"   #version 3.x
else
    QEMU="/etc/qemu-server"       #version 2.x
fi

# Check syntax
if [ $# != 2 ]; then
    echo "Illegal Syntax:"
    echo "Syntax: ${0##*/} <Source VMID> <Destination VMID>"
    exit 2
fi

# Source VMID Argument
SRC=$1
# Destination VMID Argument
DEST=$2


# Make sure VMIDs are valid (100-999)
if [[ -z $(echo "$SRC" | grep "^[0-9][0-9][0-9]$") ]]
then
    echo "$SRC is not a valid Source VMID (100-999)!"
    exit 2
fi

if [[ -z $(echo "$DEST" | grep "^[0-9][0-9][0-9]$") ]]
then
    echo "$DEST is not a valid Destination VMID (100-999)!"
    exit 2
fi

# Make sure source VMID exists
if [ -e $OPENVZ/$SRC.conf ]; then
    TYPE="CT";
elif [ -e $QEMU/$SRC.conf ]; then
    TYPE="VM";
else
    echo "VMID $SRC does not exist!"
    exit 2
fi

# Make sure destination VMID does not exist
if [ -e $OPENVZ/$DEST.conf ]; then
    echo "CT $DEST already exists!"
    exit 2
fi
if [ -e $QEMU/$DEST.conf ]; then
    echo "VM $DEST already exists!"
    exit 2
fi

echo "Moving $TYPE $SRC to $TYPE $DEST"

# Move CT or VM
if [ $TYPE = "CT" ]; then
    # Begin CT move
    LOCAL="/var/lib/vz"
    STATUS=$(vzlist -aHo status $SRC)

    # Make sure CT is running (the checkpoint/restore below requires it)
    if [ "$STATUS" != "running" ]; then
        echo "Starting $TYPE $SRC"
        vzctl start $SRC    # Start source container
    fi
    if [ "$(vzlist -aHo status $SRC)" != "running" ]; then
        echo "CT $SRC is $(vzlist -aHo status $SRC)!"
        exit 2
    fi

    # Checkpoint CT 
    vzctl chkpnt $SRC --dumpfile /tmp/openvz-renumber-dump.$SRC

    # Move files
    mv $LOCAL/private/$SRC $LOCAL/private/$DEST
    mv $LOCAL/root/$SRC $LOCAL/root/$DEST

    # Update config file
    sed -e "s/\/$SRC/\/$DEST/" -e "s/veth$SRC/veth$DEST/" $OPENVZ/$SRC.conf > $OPENVZ/$DEST.conf
    # Remove old config file
    rm -f $OPENVZ/$SRC.conf

    # Rename action scripts (e.g. mount/umount hooks), if any exist
    # ('[ -e ]' breaks when the glob matches more than one file)
    if ls $OPENVZ/$SRC.* >/dev/null 2>&1; then
        rename "s/$SRC/$DEST/" $OPENVZ/$SRC.*
    fi

    # Update quota
    cp /var/lib/vzquota/quota.$SRC /var/lib/vzquota/quota.$DEST
    vzquota on $DEST -p /var/lib/vz/private/$DEST

    # Restore checkpoint
    vzctl restore $DEST --dumpfile /tmp/openvz-renumber-dump.$SRC

    # Restore original status
    if [ "$STATUS" = "stopped" ]; then
        echo "Stopping $TYPE $DEST"
        vzctl stop $DEST
    fi

    # Cleanup
    vzquota drop $SRC
    rm -f /tmp/openvz-renumber-dump.$SRC

else
    # Begin VM move
    # Path to remote images (usually /mnt/pve/<StorageName>/images)
    REMOTE="/mnt/pve/PVE_NAS/images"
    # Path to local images (/var/lib/vz/images)
    LOCAL="/var/lib/vz/images"

    STATUS=$(qm status $SRC | awk '{ print $2 }')

    # Make sure VM is not running
    if [ "$STATUS" != "stopped" ]; then
        echo "Shutting down $TYPE $SRC"
        qm shutdown $SRC -forceStop
    fi
    if [ "$(qm status $SRC | awk '{ print $2 }')" != "stopped" ]; then
        echo "VM $SRC is $(qm status $SRC | awk '{ print $2 }')!"
        exit 2
    fi

    # Local images
    if [ -e $LOCAL/$SRC ]; then
        # Move local image folders
        mv $LOCAL/$SRC $LOCAL/$DEST
        # Rename local image files
        rename "s/vm-$SRC/vm-$DEST/" $LOCAL/$DEST/*
    fi

    # Remote images
    if [ -e $REMOTE/$SRC ]; then
        # Move remote image folders
        mv $REMOTE/$SRC $REMOTE/$DEST
        # Rename remote image files
        rename "s/vm-$SRC/vm-$DEST/" $REMOTE/$DEST/*
    fi

    # Update config file
    sed -e "s/:$SRC\//:$DEST\//" -e "s/vm-$SRC/vm-$DEST/" $QEMU/$SRC.conf > $QEMU/$DEST.conf
    # Remove old config file
    rm -f $QEMU/$SRC.conf

    # Rename lock file
    if [ -e $QEMU/.lock-$SRC.conf ]; then
        mv -v $QEMU/.lock-$SRC.conf $QEMU/.lock-$DEST.conf
    fi

    # Restart if status was running
    if [ "$STATUS" = "running" ]; then
        echo "Restarting $TYPE $DEST"
        qm start $DEST
    fi
fi

# Finished
echo "Operation Completed"

Edit #1: Use the -forceStop option for 'qm shutdown' to simplify the VM shutdown.
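
Example usage (illustrative VMIDs; unlike vm_move.sh, a running guest is stopped and restarted automatically):

Code:
# move CT or VM 102 to the unused VMID 302
pve-move 102 302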
 