Changing VMID of a VM

of course there is :)

You can use the CLI tool qmrestore. So if you've made a backup and the archive is called vzdump-qemu-666-2022_02_02-12_25_00.vma.zst, you can restore it to a new VMID:
Code:
qmrestore vzdump-qemu-666-2022_02_02-12_25_00.vma.zst 999

and it should make a new VM with the ID 999.
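If you also want the restored VM's disks on a particular storage, qmrestore takes a `--storage` option. A hedged sketch that only prints the command (drop the `echo` to run it on a real PVE host; the dump directory and the storage name `local-lvm` are example values, not from this thread):

```shell
#!/usr/bin/env bash
# Build the restore command; drop the "echo" to actually execute it.
# The dump path and "local-lvm" are assumptions -- adjust to your setup.
archive="/var/lib/vz/dump/vzdump-qemu-666-2022_02_02-12_25_00.vma.zst"
echo qmrestore "$archive" 999 --storage local-lvm
```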


edit:
You can also do it via the GUI:
instead of clicking the VM and restoring the backup there, click on the storage where you made the backup and restore it to an arbitrary VMID ;)

hope this helps!

Edit: OK, I just tried this to test it and it works; all backups are still on my backup drive.
I just needed to change my backup scheduler to the new VM ID and everything works great, thanks.

Question:
If you do this via backup as mentioned, what happens to all the other VM backups?
Are they still going to be associated with the new VM ID you changed it to?
Obviously the scheduler must be changed to the new VM ID, but are the old backups still accessible after you delete the old VM?
 
What happens to all the other VM backups?
Are they still going to be associated with the new VM ID you changed it to?
No. Restoring the backup to another VMID doesn't change the association of the backup file with the old VM.

But are the old backups still accessible after you delete the old VM?
Yes, they should still be located on your backup storage.
 
Maybe my experience will be useful to someone. I needed to send backups from multiple servers (not in a cluster) to a single PBS server. I changed this script to suit my situation; the previous script did not work on my Proxmox 7.1, and I had to do a mass change. I use only containers.

Bash:
#!/usr/bin/env bash

export oldVMID=$1 newVMID=$2 vgNAME=$3 ; \
# rename each LV, substituting the VMID inside its name
# (unlike taking only the last character, this also works for disk numbers above 9)
for i in $(lvs -a | grep $vgNAME | awk '{print $1}' | grep $oldVMID); \
do lvrename /dev/$vgNAME/$i /dev/$vgNAME/${i/$oldVMID/$newVMID}; done; \

## for VM
#sed -i "s/$oldVMID/$newVMID/g" /etc/pve/qemu-server/$oldVMID.conf; \
#mv /etc/pve/qemu-server/$oldVMID.conf /etc/pve/qemu-server/$newVMID.conf; \

## for CT
sed -i "s/$oldVMID/$newVMID/g" /etc/pve/lxc/$oldVMID.conf; \
mv /etc/pve/lxc/$oldVMID.conf /etc/pve/lxc/$newVMID.conf; \

unset oldVMID newVMID vgNAME;

For a one-time run, you can use (102 to 202):
Bash:
/root/change_vmid.cmd 102 202 pve

For a mass run (104-109 to 204-209):
Bash:
for i in {4..9}; do /root/change_vmid.cmd 10$i 20$i pve; done
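A note on the rename pattern: bash parameter expansion derives the new LV name from the old one and keeps multi-digit disk numbers intact (extracting only the last character of the name would mangle, say, disk-10). A standalone sketch with made-up LV names:

```shell
#!/usr/bin/env bash
# Substitute the VMID inside an LV name; names are made up for illustration.
oldVMID=102 newVMID=202
for lv in vm-102-disk-0 vm-102-disk-10; do
  echo "${lv/vm-$oldVMID-/vm-$newVMID-}"
done
# prints:
# vm-202-disk-0
# vm-202-disk-10
```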
 
Works like a charm. It boots and works fine.

For the people asking for the VG: run lvs -a and the second column shows your VG.

Thanks for the script. I've made a sh script from it. It checks whether the VM is in a volume group; if not, it exits. You only enter the old VMID and the new one; the script does the rest.

Code:
#!/bin/bash

echo Put the VMID to change
read oldVMID
case $oldVMID in
    ''|*[!0-9]*)
        echo bad input. Exiting
        exit;;
    *)
        echo Old VMID - $oldVMID ;;
esac
echo
echo Put the new VMID
read newVMID
case $newVMID in
    ''|*[!0-9]*)
        echo bad input. Exiting
        exit;;
    *)
        echo New VMID - $newVMID ;;
esac
echo

# plain uniq (not -d), so VMs with a single disk are found too
vgNAME="$(lvs --noheadings -o lv_name,vg_name | grep $oldVMID | awk -F ' ' '{print $2}' | uniq)"

case $vgNAME in
    "")
        echo Machine not in Volume Group. Exiting
        exit;;
    *)
        echo Volume Group - $vgNAME ;;
esac

for i in $(lvs -a | grep $vgNAME | awk '{print $1}' | grep $oldVMID);
do lvrename $vgNAME $i ${i/$oldVMID/$newVMID};
done;
sed -i "s/$oldVMID/$newVMID/g" /etc/pve/qemu-server/$oldVMID.conf;
mv /etc/pve/qemu-server/$oldVMID.conf /etc/pve/qemu-server/$newVMID.conf;

echo Ta-Da!
Can you make this script work with zfs?
 
vim /etc/pve/nodes/proxmox/qemu-server/106.conf
Line 8: scsi0: local-zfs:vm-104-disk-0,size=60G

Rename the conf file:
mv /etc/pve/nodes/proxmox/qemu-server/106.conf /etc/pve/nodes/proxmox/qemu-server/104.conf

List the ZFS volumes (or all of them):
zfs list -t all
...
rpool/data/vm-106-disk-0 1,11G 7,02T 1,10G -
...

Rename it with zfs rename:
zfs rename rpool/data/vm-106-disk-0 rpool/data/vm-104-disk-0

A host restart is not needed:
qm start 104
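The manual ZFS steps above can be rolled into a small loop. A hedged sketch that only prints the rename commands (drop the `echo` to execute them; the dataset name comes from the example above, and your pool layout may differ):

```shell
#!/usr/bin/env bash
# Print a "zfs rename" for every dataset carrying the old VMID.
# On a real host you would feed the loop from:
#   zfs list -H -o name | grep "vm-${oldVMID}-"
oldVMID=106 newVMID=104
for ds in rpool/data/vm-106-disk-0; do
  echo zfs rename "$ds" "${ds//vm-$oldVMID-/vm-$newVMID-}"
done
# prints: zfs rename rpool/data/vm-106-disk-0 rpool/data/vm-104-disk-0
```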
 

This script works like a charm. I love it, Thank you very much!
 

Hello,
I would like to test your script. Do I need to create a .sh file and copy it to the root folder?
Can you explain in more detail?
 
"click on the storage where you made the backup and restore it to an arbitrary VMID"

Thank you very much, that did the trick ! ;)
Thanks, this worked fairly easily, but it did replicate the original VM's IP. Until I shut down and removed that VM, I had duplicate IPs.
 
Hello everyone. I've updated the script to change the VMID.

Bash:
#!/bin/bash
echo Put the VM type to change '(lxc, qemu)'
read -r VM_TYPE
case "$VM_TYPE" in
"lxc") VM_TYPE="lxc" ;;
"qemu") VM_TYPE="qemu-server" ;;
*)
  echo bad input. Exiting
  exit
  ;;
esac
echo
echo Put the VMID to change
read -r OLD_VMID
case $OLD_VMID in
'' | *[!0-9]*)
  echo bad input. Exiting
  exit
  ;;
*)
  echo Old VMID - "$OLD_VMID"
  ;;
esac
echo
echo Put the new VMID
read -r NEW_VMID
case $NEW_VMID in
'' | *[!0-9]*)
  echo bad input. Exiting
  exit
  ;;
*)
  echo New VMID - "$NEW_VMID"
  ;;
esac
echo

# plain uniq (not -d), so VMs with a single disk are found too
VG_NAME="$(lvs --noheadings -o lv_name,vg_name | grep "$OLD_VMID" | awk -F ' ' '{print $2}' | uniq)"

case "$VG_NAME" in
"")
  echo Machine not in Volume Group. Exiting
  exit
  ;;
*)
  echo Volume Group - "$VG_NAME"
  ;;
esac

for volume in $(lvs -a | grep "$VG_NAME" | awk '{print $1}' | grep "$OLD_VMID"); do
  lvrename "$VG_NAME" "$volume" "${volume//$OLD_VMID/$NEW_VMID}"
done
sed -i "s/$OLD_VMID/$NEW_VMID/g" /etc/pve/"$VM_TYPE"/"$OLD_VMID".conf
mv /etc/pve/"$VM_TYPE"/"$OLD_VMID".conf /etc/pve/"$VM_TYPE"/"$NEW_VMID".conf

echo Done!
 
  • Like
Reactions: pmra and nayz
Just restore a backup to the new VM ID. Or copy the configuration without the lines for the virtual disks (on the console) and move the disks over via the Proxmox GUI.
I’m aware, I’m just saying there should be a more seamless way of doing it, without having to go through hoops and workarounds just to change a simple thing like VM ID.
 
I’m aware, I’m just saying there should be a more seamless way of doing it, without having to go through hoops and workarounds just to change a simple thing like VM ID.
Why? How often do you have to change the IDs? Based on my experience, I really don't need this and we have hundreds of machines running.
 
Why? How often do you have to change the IDs? Based on my experience, I really don't need this and we have hundreds of machines running.
Everyone’s use case is different. You’ve just said it’s no use for you, but for others it clearly is, otherwise it wouldn’t be asked.

I’m not sure why people are always against making things easier for others.
 
Everyone’s use case is different. You’ve just said it’s no use for you, but for others it clearly is, otherwise it wouldn’t be asked.
I don't want to be condescending, just looking at the statistics:
if there were so many people asking for it, why has no one filed a bug report, which is the proper way of getting things done around PVE?
It was mentioned years ago, but nothing has changed since then.

I’m not sure why people are always against making things easier for others.
I'm not against this in general, yet we should focus on the important stuff. That it hasn't happened yet is always a good indicator that it is not that important to people, or that most of them just use backup & restore, which is THE easiest method of getting it done.
 
I don't want to be condescending, just looking at the statistics:
if there were so many people asking for it, why has no one filed a bug report, which is the proper way of getting things done around PVE?
It was mentioned years ago, but nothing has changed since then.


I'm not against this in general, yet we should focus on the important stuff. That it hasn't happened yet is always a good indicator that it is not that important to people, or that most of them just use backup & restore, which is THE easiest method of getting it done.

Newcomers are most likely not aware of the 'proper' way of reporting/requesting features. Hence people just use public forums, Reddit, etc.

It's clear Proxmox has other priorities at hand, though something like this "shouldn't" be too much work to implement. However, I hope we can agree that making things easier for new/existing users, from an accessibility standpoint, should be considered.
 
Newcomers are most likely not aware of the 'proper' way of reporting/requesting features. Hence people just use public forums, Reddit, etc.
Sure, therefore it was said to do so in this very thread, yet no one took the time to do it.

It's clear Proxmox has other priorities at hand, though something like this "shouldn't" be too much work to implement. However, I hope we can agree that making things easier for new/existing users, from an accessibility standpoint, should be considered.
Totally agreed, yet as with every open-source project there are proper procedures to follow. If it has already been pointed out how to do this, what more is there to do from a vendor's point of view? In other projects, the discussion thread would be locked after such a vendor answer, because any further discussion is rendered moot (from the vendor's point of view).
 
Certainly a corner case where a move like this is less trouble than a restore, but for anyone caught in it, I combined the above resources and fixes (tested on 7.4-17 with VMs on LVM and ZFS).

Obviously apply at your own risk. Fixing a mistake by hand is doable, but not great fun. For example, I have done zero testing of the hilarious case where the destination VMID already exists, or even more fun: only some volumes with that VMID exist, left behind by earlier shenanigans.

TL;DR: really don't use this. But also, there it is for you to use:
Code:
#!/usr/bin/env bash

echo Put the VM type to change '(lxc, qemu)'
read -r VM_TYPE
case "$VM_TYPE" in
"lxc") VM_TYPE="lxc" ;;
"qemu") VM_TYPE="qemu-server" ;;
*)
  echo bad input. Exiting
  exit
  ;;
esac
echo
echo Put the VMID to change
read -r OLD_VMID
case $OLD_VMID in
'' | *[!0-9]*)
  echo bad input. Exiting
  exit
  ;;
*)
  echo Old VMID - "$OLD_VMID"
  ;;
esac
echo
echo Put the new VMID
read -r NEW_VMID
case $NEW_VMID in
'' | *[!0-9]*)
  echo bad input. Exiting
  exit
  ;;
*)
  echo New VMID - "$NEW_VMID"
  ;;
esac
echo

VG_NAME="$(lvs --noheadings -o lv_name,vg_name | grep "$OLD_VMID" | awk -F ' ' '{print $2}' | uniq)"

case "$VG_NAME" in
"")
  echo Machine not in Volume Group. Exiting
  exit
  ;;
*)
  echo Volume Group - "$VG_NAME"
  ;;
esac

for volume in $(lvs -a | grep "$VG_NAME" | awk '{print $1}' | grep "$OLD_VMID"); do
  newVolume="${volume//"${OLD_VMID}"/"${NEW_VMID}"}"
  lvrename "$VG_NAME" "$volume" "$newVolume"
done

for volume in $(zfs list -t all | awk '{print $1}' | grep "vm-${OLD_VMID}-disk"); do
  newVolume="${volume//"${OLD_VMID}"/"${NEW_VMID}"}"
  zfs rename "$volume" "$newVolume"
done

sed -i "s/$OLD_VMID/$NEW_VMID/g" /etc/pve/"$VM_TYPE"/"$OLD_VMID".conf
mv /etc/pve/"$VM_TYPE"/"$OLD_VMID".conf /etc/pve/"$VM_TYPE"/"$NEW_VMID".conf

echo Done!
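The untested "destination VMID already exists" case mentioned above could be caught with a quick pre-flight check before any renaming. A sketch under the same path conventions as the script (the VMID and type are hardcoded here only for illustration; in the script they are read in interactively):

```shell
#!/usr/bin/env bash
# Abort early if the destination VMID already has a config file.
# NEW_VMID and VM_TYPE are example values, not part of the original script.
NEW_VMID=204
VM_TYPE="qemu-server"
if [ -e "/etc/pve/$VM_TYPE/$NEW_VMID.conf" ]; then
  echo "VMID $NEW_VMID already has a config file. Exiting"
  exit 1
fi
echo "VMID $NEW_VMID looks free"
```

A fuller guard would also scan `lvs` and `zfs list` output for leftover `vm-$NEW_VMID-disk-*` volumes from earlier shenanigans.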
 
