Manually move VM to a different Node

CZappe

Member
Jan 25, 2021
Santa Fe, NM, USA
www.bti-usa.com
I am working in an environment with multiple unclustered nodes and have one node that needs to go down for maintenance. I'd like to migrate the active VMs and containers on this node to a different node for continuity of access. I know that without an established cluster, this has been described as a "manual" process, though I'm not sure what this process entails or how to close a VM on one node and wake it up on another one. All our VM disks are currently hosted on a separate NAS, connected via NFS, which I imagine should simplify things somewhat.

What steps would I need to take to cleanly and successfully migrate a VM to a different node?
 
If you can connect the NFS share from node X to node Y, migrate the storage onto it. Once that is finished, stop the VM, move the VM config from the old node to the new one, and start it again. Make sure you move the config (or copy it and then delete the original) so the VM cannot also be started on the old node.

That's all.
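The essence of that manual move, relocating a single .conf file while making sure the VM ends up defined on exactly one node, can be sketched as a small shell function. The directory paths and VM ID below are placeholders; on a real node the config directory is /etc/pve/qemu-server, and you would stop the VM (e.g. with qm stop) before touching its config.

```shell
# move_conf SRC_DIR DST_DIR VMID
# Moves VMID.conf so the VM definition exists on exactly one node.
# Refuses to overwrite an existing config at the destination, which
# would indicate a VM ID collision.
move_conf() {
    src="$1/$3.conf"
    dst="$2/$3.conf"
    [ -f "$src" ] || { echo "no config for VM $3 in $1" >&2; return 1; }
    [ -e "$dst" ] && { echo "VM $3 already defined in $2" >&2; return 1; }
    mv "$src" "$dst"
}

# Hypothetical usage, with a mounted copy of the old node's config dir:
# move_conf /mnt/nodeA-pve/qemu-server /etc/pve/qemu-server 123
```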
 
All of our nodes have access to the same pool of NFS drives under Datacenter/Storage in the GUI so nodes X and Y can both see the storage location for these VMs. How does one go about moving the config files?
 
Thank you, kindly! I was able to transfer the config files to the new node and was impressed to see the VMs populate almost immediately. Taking your tip to remove the config files from the former node to prevent any collisions if both instances were to boot at the same time, here are the steps I took, in case a step-by-step guide helps anybody else. I'll refer to the old and new nodes as Node A and Node B, respectively.
  1. Shut down the VM and confirm its status is "Stopped" in the Proxmox web GUI
  2. Log into Node A CLI, browse to the config file directory and rename the config file. The VM will disappear from the web GUI.
    cd /etc/pve/qemu-server/
    mv 123.conf 123.conf.bak
  3. Create a directory to store the config files on a shared mount point, accessible by Node B (useful as I was migrating several VMs at once)
    mkdir /mnt/pve/[shared]/pve-conf-files
  4. Copy the config file from Node A to the config folder on the shared drive
    cp 123.conf.bak /mnt/pve/[shared]/pve-conf-files
  5. Log into Node B CLI, browse to the local config file directory, and copy the config files over from the shared drive
    cd /etc/pve/qemu-server
    cp /mnt/pve/[shared]/pve-conf-files/123.conf.bak .
  6. Rename the copied config file back to its original name
    mv 123.conf.bak 123.conf
  7. The VM will now appear in the web GUI for Node B and may be started normally once it has loaded.
This wasn't that difficult, in the end, though I'm surprised that all I needed to do was move a configuration file that was only about 10-20 lines of text in many cases. This method didn't copy over backup settings but those are easy to replicate if I make sure I've documented things before taking a VM offline.
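Since I was moving several VMs at once, steps 2-4 above lend themselves to a small loop. This is only a sketch with all paths as placeholders; on the source node the config directory would be /etc/pve/qemu-server and the share something like /mnt/pve/[shared]/pve-conf-files.

```shell
# stage_confs CONF_DIR SHARE_DIR VMID...
# For each VM ID: rename its config to .conf.bak on the source node
# (so the VM disappears from that node), then copy the .bak onto a
# shared mount that the destination node can read.
stage_confs() {
    conf_dir="$1"
    share_dir="$2"
    shift 2
    mkdir -p "$share_dir" || return 1
    for vmid in "$@"; do
        mv "$conf_dir/$vmid.conf" "$conf_dir/$vmid.conf.bak" || return 1
        cp "$conf_dir/$vmid.conf.bak" "$share_dir/" || return 1
    done
}

# Hypothetical usage:
# stage_confs /etc/pve/qemu-server /mnt/pve/shared/pve-conf-files 123 124
```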
 
I joined this forum just to say thank you!
For whatever reason, my 3-node cluster was failing to migrate a VM set up for HA. I had no idea what went wrong, but after a few restarts I finally got an error message saying the VM disk was missing. It turns out it was still on the previous node. Your instructions really saved me. The VM was our HAProxy o_O - the worst thing for us to have down
 
I'm adding a quick reply to this thread, as the original poster, to provide a one-line, node-to-node method for moving config files over a network connection via rsync, without the time-intensive copying and renaming of files along the way. You may need to install rsync via apt to use this method:

root@nodeA# rsync -vp --remove-source-files /etc/pve/qemu-server/123.conf root@nodeB-fqdn:/etc/pve/qemu-server/

The --remove-source-files option will "move" the conf file off Node A, rather than copying it, once the transfer completes successfully. It can be omitted if you want the security of keeping the original file in place; you can then rename the file after the transfer (e.g. root@nodeA# mv /etc/pve/qemu-server/123.conf /etc/pve/qemu-server/123.conf.bak) to prevent it from being accessible from two nodes at once.
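If several VMs are moving at once, the one-liner can be wrapped in a loop. The function below is a sketch with the source directory parameterized for illustration; on a real node it would be /etc/pve/qemu-server, and the destination would be root@nodeB-fqdn:/etc/pve/qemu-server/.

```shell
# push_confs SRC_DIR DEST VMID...
# Runs the rsync one-liner for each VM ID. With --remove-source-files,
# each successful transfer also deletes the source config, turning the
# copy into a move. DEST may be a remote root@host:/path or a local dir.
push_confs() {
    src_dir="$1"
    dest="$2"
    shift 2
    for vmid in "$@"; do
        rsync -vp --remove-source-files "$src_dir/$vmid.conf" "$dest" || return 1
    done
}

# Hypothetical usage:
# push_confs /etc/pve/qemu-server root@nodeB-fqdn:/etc/pve/qemu-server/ 123 124
```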
 
wait a minute !!

check this

Cluster nodes = pve1, pve2, pve3
Storage = CEPH
vm machine id = 100

When node pve1 goes down and VM 100 reports "error 595" when you try to migrate it or do anything else with it, all you need to do is copy the conf file for VM ID 100.

go to pve2 or pve3 shell

$ cp /etc/pve/nodes/pve1/qemu-server/100.conf /etc/pve/nodes/pve2/qemu-server/100.conf

ENJOY !!
 
Moving is the correct operation, so the config does not remain on both nodes:

Code:
$ mv /etc/pve/nodes/pve1/qemu-server/100.conf  /etc/pve/nodes/pve2/qemu-server/100.conf
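A small guard around that mv avoids clobbering a config that already exists on the surviving node. This is a sketch; the node names and VM ID are placeholders, and the pmxcfs root is parameterized only so it can be exercised outside a real cluster (on a real node it is /etc/pve/nodes).

```shell
# recover_conf DEAD_NODE LIVE_NODE VMID [PVE_ROOT]
# Moves a VM config away from a dead cluster node inside the pmxcfs
# tree, but only if the source config exists and the target node does
# not already hold one for the same VM ID.
recover_conf() {
    root="${4:-/etc/pve/nodes}"
    src="$root/$1/qemu-server/$3.conf"
    dst="$root/$2/qemu-server/$3.conf"
    [ -f "$src" ] || { echo "nothing to recover at $src" >&2; return 1; }
    [ -e "$dst" ] && { echo "VM $3 already present on $2" >&2; return 1; }
    mv "$src" "$dst"
}

# Hypothetical usage, from a shell on a surviving node:
# recover_conf pve1 pve2 100
```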
 