Shut down the VM, then do a full clone with the new VMID, and then remove the original VM so the VMID you need is free for the new VM backup.
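A rough CLI equivalent of that clone-and-destroy approach (VMIDs 100 and 200 are only placeholders; keep the original until you have verified the clone):

Code:
qm shutdown 100            # cleanly stop the source VM first
qm clone 100 200 --full    # full (not linked) clone under the new VMID
qm start 200               # boot-test the clone
qm destroy 100             # remove the original only once you are satisfied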
> You set up PBS sync to a remote machine, and this same machine either already has a VM backup with the same ID or will eventually back up a VM with the same ID. This causes those backups to be grouped as if they were the same machine, which leads to confusion and issues with purging if the rules are set on the PBS instance.
> For me, I have a cloud server running PBS that already had backups under VMID 100 and 101. Once I added a sync job from a remote host, it synced successfully, but into the same group as the VMIDs that were already there. This shows up as extra VM backups.

You should not be backing up multiple independent Proxmox hosts into the same PBS datastore. You should create a second datastore in PBS; it could even point to the same underlying filesystem, just a different top-level directory.
> You should not be backing up multiple independent Proxmox hosts into the same PBS datastore. You should create a second datastore in PBS, it could even point to the same underlying filesystem, just different top level directory.

Duh! Can't believe I didn't realize my mistake. Thank you.
> You should create a second datastore in PBS, it could even point to the same underlying filesystem, just different top level directory.

When I tried to create a second datastore and point it to /datastore/backups, it wouldn't let me; it failed with a 'file exists' error.
> You don't necessarily need multiple stores. Namespaces exist if all you want is to separate host VMs on PBS.

Or you can do this ^ https://castinganet.net/posts/PBS-Namespaces/
> just different top level directory.
That would not be pointing to a different top-level directory.
You need to reshuffle your config: create /datastore/backups/pbs1, move the contents of /datastore/backups to /datastore/backups/pbs1, and repoint your PBS storage pool to /datastore/backups/pbs1.
Then create /datastore/backups/pbs2 and point your second PBS storage pool to it.
Good luck
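In shell terms, that reshuffle might look roughly like this. It is only a sketch: it assumes /datastore/backups currently holds exactly one PBS datastore and nothing else, and that all backup, sync and GC jobs touching it are paused while you move things:

Code:
mkdir /datastore/backups/pbs1
# move everything already there (including the hidden .chunks directory) one level down
find /datastore/backups -mindepth 1 -maxdepth 1 ! -name pbs1 \
    -exec mv {} /datastore/backups/pbs1/ \;

# repoint the existing datastore: edit /etc/proxmox-backup/datastore.cfg so its
# "path" entry reads /datastore/backups/pbs1

# register the second datastore in its own empty top-level directory
proxmox-backup-manager datastore create pbs2 /datastore/backups/pbs2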
> > just different top level directory.
>
> That would not be pointing to a different top-level directory.
> You need to reshuffle your config: create /datastore/backups/pbs1, move the contents of /datastore/backups to /datastore/backups/pbs1, and repoint your PBS storage pool to /datastore/backups/pbs1.
> Then create /datastore/backups/pbs2 and point your second PBS storage pool to it.

I made changes and created two datastores on PBS: osvms and mvms. But when I tried to add them under cluster storage, I got 'cannot find /datastore/osvms' even though I gave the account the correct admin rights. Do you know what I did wrong? I will try it again tomorrow.
> You don't necessarily need multiple stores. Namespaces exist if all you want is to separate host VMs on PBS.

Seeing as I got a ping on this, I figured more info would be helpful: it is worth noting that when you separate by namespace, you continue to get maximum use of deduplication (which only runs at the per-datastore level).
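For the namespace route, each standalone PVE host can point at the same PBS datastore but at its own namespace. A sketch of what the PVE side might look like (server, datastore, namespace and user names are all made up; create the namespaces on the PBS datastore first):

Code:
# /etc/pve/storage.cfg on the first host (names are examples)
pbs: pbs-backups
        server pbs.example.lan
        datastore backups
        namespace host1
        content backup
        username backup@pbs
        fingerprint aa:bb:...:ff

# the second host uses an identical entry except for:  namespace host2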
If anyone wants to change the ID of a VM template:

Code:
#!/bin/bash
# Renames a ZFS-backed VM or template from one VMID to another.
# Run as root on the PVE node, with the VM/template not running.

echo "Enter the old VMID to change:"
read -r oldVMID
case "$oldVMID" in
    ''|*[!0-9]*)
        echo "Bad input. Exiting."
        exit 1;;
    *)
        echo "Old VMID: $oldVMID";;
esac
echo

echo "Enter the new VMID:"
read -r newVMID
case "$newVMID" in
    ''|*[!0-9]*)
        echo "Bad input. Exiting."
        exit 1;;
    *)
        # Check if new VMID is less than 100
        if [ "$newVMID" -lt 100 ]; then
            echo "New VMID must be 100 or greater. Exiting."
            exit 1
        fi
        echo "New VMID: $newVMID";;
esac
echo

# Ensure old and new VMIDs are not the same
if [ "$oldVMID" -eq "$newVMID" ]; then
    echo "Old VMID and new VMID are the same. Exiting."
    exit 1
fi

# Check if new VMID already exists
if [ -f "/etc/pve/qemu-server/${newVMID}.conf" ]; then
    echo "A VM with VMID $newVMID already exists. Exiting."
    exit 1
fi

# Find ZFS datasets associated with the old VMID (including base images)
zfsDatasets=$(zfs list -H -o name | grep -E "/(vm|base)-${oldVMID}-disk-")
if [ -z "$zfsDatasets" ]; then
    echo "No ZFS datasets found for VMID $oldVMID. Exiting."
    exit 1
else
    echo "Found ZFS datasets:"
    echo "$zfsDatasets"
fi
echo

# Rename ZFS datasets to new VMID
for dataset in $zfsDatasets; do
    newDataset=$(echo "$dataset" | sed "s#\(/\(vm\|base\)-\)${oldVMID}-disk-#\1${newVMID}-disk-#")
    echo "Renaming $dataset to $newDataset"
    zfs rename "$dataset" "$newDataset"
done

# Update and rename the VM configuration file
if [ -f "/etc/pve/qemu-server/${oldVMID}.conf" ]; then
    # Replace old VMID with new VMID in disk references. The volume ID can be
    # preceded by ':' (e.g. local-zfs:vm-100-disk-0), '/' (directory storage
    # paths) or '='; '#' is used as the sed delimiter so the '/' inside the
    # bracket expression needs no escaping.
    sed -i "s#\([:=/]\(vm\|base\)-\)${oldVMID}\(-disk-\)#\1${newVMID}\3#g" "/etc/pve/qemu-server/${oldVMID}.conf"
    # Rename the config file
    mv "/etc/pve/qemu-server/${oldVMID}.conf" "/etc/pve/qemu-server/${newVMID}.conf"
    echo "VM configuration updated and renamed."
else
    echo "Configuration file for VMID $oldVMID not found. Exiting."
    exit 1
fi

# Update configurations of other VMs (linked clones) that reference the base image
echo "Checking for VMs linked to base images of VMID $oldVMID..."
linkedVMs=$(grep -rl "base-${oldVMID}-disk-" /etc/pve/qemu-server/ | grep -v "${oldVMID}.conf\|${newVMID}.conf")
if [ -n "$linkedVMs" ]; then
    echo "Found VMs linked to the base image:"
    echo "$linkedVMs"
    echo
    for vmConfig in $linkedVMs; do
        echo "Updating $vmConfig..."
        sed -i "s#\([:=/]\(base\)-\)${oldVMID}\(-disk-\)#\1${newVMID}\3#g" "$vmConfig"
    done
    echo "Linked VM configurations updated."
else
    echo "No VMs linked to the base image of VMID $oldVMID."
fi
echo
echo "Operation completed successfully!"
> if anyone wants to change id on VM template: *script*

What's the benefit of using a script instead of cloning the VM to the desired new VMID and removing the old VM afterwards?
> hi, but is it a script? what extension should it have?

Extensions don't matter on Linux; the shebang line and the execute bit are what matter.
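For example (the file name change-vmid.sh is arbitrary; any name, with or without an extension, works):

Code:
chmod +x change-vmid.sh   # the execute bit makes it directly runnable
./change-vmid.sh          # the #!/bin/bash shebang selects the interpreter
bash change-vmid.sh       # or call the interpreter explicitly; no execute bit needed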