How to move VM to another storage

zzz09700

Ok, we are in the process of migrating an ESXi server to Proxmox.
All VMs have been transferred/converted to PVE format and are now running on an array of SATA SSDs.
PCIe passthrough is up and running, so we are fairly comfortable with the PVE setup now.

But here comes another problem: the SATA SSDs came from our old spare parts (who still uses SATA SSDs these days, lol). Since initial testing on PVE went well, the server will soon get its 40G NIC and network storage, at which point most of the VMs should be moved to network storage.

With ESXi this is simple, since the "import/register VM from storage" function basically allows us to SSH into the server, move the VM's files to network storage, hook ESXi up to the network storage, and re-register the VM. I don't see a similar function within PVE, though, so what's the official way of moving a VM to another storage?
 
Use the "Move disk" button in the GUI to relocate the disk to the new storage.
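For scripting, the same operation is also exposed on the CLI. A hedged sketch, assuming a VM with ID 100 and a target storage named "nfs-store" (both placeholders):

```shell
# Move VM 100's scsi0 disk to storage "nfs-store" and delete the old
# copy once the move succeeds. On older PVE releases the subcommand
# is "qm move_disk"; newer ones also accept "qm disk move".
qm disk move 100 scsi0 nfs-store --delete 1

# Verify the disk now lives on the new storage:
qm config 100 | grep scsi0
```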
Is that moving the whole VM or just the VM's disk?
Can we remove the unused SATA SSDs from the server (I mean, unplug those disks from the motherboard and send them back to the spare parts warehouse) after the VM relocation?
 
It appears that with PVE the metainfo of the VMs is kept somewhere within PVE's root partition, and only the disks are stored on the storage we select during VM creation. Is that correct?

So if for some reason PVE went kaput, all the VM metainfo would be gone with it?
 
If you want to move the VM you should migrate it.
The OP asked about moving the VM's storage location - he is not referring to moving nodes.

It appears with PVE the metainfo of VMs are somewhere within PVE's root partition
Usually VMs have disks & a conf file (usually located at /etc/pve/qemu-server/VMID.conf) - but that's it. PVE keeps its own metadata on the host itself.

If your host drive itself has nothing to do with the SATAs you wish to remove, then after moving the VMs from the original storage to the new storage you should be good to remove the old storage from PVE & then physically remove the drives.
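For illustration, a VMID.conf is just a short key/value file; the values below are hypothetical example entries, not from any real setup:

```
# /etc/pve/qemu-server/100.conf (example values)
boot: order=scsi0
cores: 4
memory: 8192
name: web01
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
scsi0: nfs-store:100/vm-100-disk-0.qcow2,size=32G
```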
 
Thanks gfngfn257, that clears things up.

So we need to back up those VMID.conf files in case anything goes wrong with PVE and a system reinstall is required.
 
Is there any way to quickly re-register a VM to PVE after a system reinstall? We definitely need escape routes for broken upgrades, like many are facing right now with the 8.2 release.
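One hedged approach, assuming the VM disks survived on shared storage and that storage is re-added under the same storage ID after the reinstall (VM ID and paths are examples):

```shell
# Drop the previously saved config back into place:
cp /backup/100.conf /etc/pve/qemu-server/100.conf

# Re-scan storages so disk volumes not referenced in any config
# show up again (unreferenced ones appear as "unused" disks):
qm rescan --vmid 100
```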
 
Ideally, you would have a cluster for availability and redundancy.


With all kinds of PCIe passthrough stuff around, it's impossible to set up HA-related stuff. But we still need a quick way out of trouble, like a full system reinstall to revert to the last stable version and get all VMs up and running ASAP.
 
On a side note about PVE 8.2:
It looks like kernel 6.8 is wreaking havoc left and right, so I'll call off any planned upgrades to Ubuntu 24.04 LTS at our site.

Don't know if I should call this lucky.
 
Is there any way to quickly re-register a VM to PVE after a system reinstall?
You need to create full proper & restorable backups within PVE. You can also use PBS - read up on this.

With backups created within PVE, vzdump will create a full set of files (usually three: the VM data itself as a *.zst archive, a *.log file, and a *.notes file). These files should be enough to fully restore a VM. I have done so on many occasions.

So we need to keep back up those VMID.conf in case anything goes wrong with PVE and a system reinstall is required.
These conf files are not in any way backups of the VMs. However, they are useful for reconstructing a VM if necessary.
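As a sketch of that vzdump/restore workflow (storage names, VM ID, and the archive filename are all placeholders):

```shell
# Back up VM 100 with zstd compression to the storage "backup-nfs":
vzdump 100 --storage backup-nfs --compress zstd --mode snapshot

# Later, restore the archive to a (possibly different) VM ID,
# placing the restored disks on "nfs-store":
qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-2024_05_01-00_00_00.vma.zst 100 --storage nfs-store
```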
 
You need to create full proper & restorable backups within PVE. You can also use PBS - read up on this.
I'm reading through PBS, but it seems like it's for VM backup, not the host. For VM disks, the network storage has its own backup solution in place, so I'm not too worried.

Basically, what I'm worried about is PVE itself - if PVE gets broken by an update or something, is there any way of restoring it from a backup?

The bottom line here is that we can make copies of VMID.conf and have some automated scripts reconstructing VMs/attaching disks back after a PVE reinstall. Kinda crude, but it should work.
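A minimal sketch of such a copy job; all paths are assumptions, and note that /etc/pve is a FUSE mount backed by the cluster config database, so these files are only visible while the pve-cluster service is running:

```shell
#!/bin/sh
# Periodic copy of VM/CT config files to a location that survives
# a PVE reinstall (e.g. the network storage).
DEST=/mnt/pve/nfs-store/pve-conf-backup

mkdir -p "$DEST"
cp -a /etc/pve/qemu-server/*.conf "$DEST"/ 2>/dev/null
cp -a /etc/pve/lxc/*.conf         "$DEST"/ 2>/dev/null
cp -a /etc/pve/storage.cfg        "$DEST"/
```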
 
I'm reading through PBS but it seems like it's for VM backup, not the host.
Correct. However you can also backup any files/configs/directories you want of the host.

PVE does not have any in-house complete backup solution for the host itself. You can however do your own - in any way you wish. (I personally make a complete disk image of my host PVE OS disk from time to time.) Search these forums - you'll find enough on this topic.

As a simple rule of thumb: make as few changes as possible on the host PVE (& document any you do have to make), put everything else in LXCs & VMs, & back these up properly. So in any event you can always reinstall & recreate your PVE environment easily - & then restore all LXCs & VMs from backups. It's not all that hard.
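For example, a crude host-config snapshot could be as simple as the following; the path list is an assumption about which files are worth re-applying, adjust to taste:

```shell
# Tar up host configuration worth re-applying after a fresh install.
# /etc/pve is a FUSE mount backed by the cluster database, so run
# this while the pve-cluster service is up.
tar czf /mnt/backup/pve-host-etc-$(date +%F).tar.gz \
    /etc/pve \
    /etc/network/interfaces \
    /etc/hosts \
    /etc/apt/sources.list /etc/apt/sources.list.d
```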

we can make copies of VMID.conf
As I already pointed out, that's not the way to make backups/restores of VMs. In PVE you can make a backup of the VM itself - without the VM's disk(s) if you wish, since you say you have your own copies of the disks themselves - and then you can easily restore the VM's complete config & setup, and restore its disks separately, if you so wish.
 
I'm reading through PBS, but it seems like it's for VM backup, not the host. For VM disks, the network storage has its own backup solution in place, so I'm not too worried.

Basically, what I'm worried about is PVE itself - if PVE gets broken by an update or something, is there any way of restoring it from a backup?

The bottom line here is that we can make copies of VMID.conf and have some automated scripts reconstructing VMs/attaching disks back after a PVE reinstall. Kinda crude, but it should work.

I have homegrown scripts to bare-metal backup/restore a standalone PVE host with an LVM+ext4 root (have not tried this with a cluster).

https://github.com/kneutron/ansitest/tree/master/proxmox

Feel free to test and provide feedback. My script still more-or-less requires the Proxmox ISO to do a fresh install / recreate the LVM setup, and then restores the ext4 root filesystem over that (an XFS root will probably work fine as well, but is as yet untested). The advantage is that fsarchiver does a "live" backup of root without having to take the server down and reboot into a Clonezilla-type environment.
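For reference, a live root-filesystem backup with fsarchiver looks roughly like this; the device path and archive location are examples, not taken from the script above:

```shell
# Save the root filesystem to an archive. -A allows backing up a
# filesystem that is currently mounted read-write; -z sets the
# compression level.
fsarchiver savefs -A -z 7 /mnt/backup/pve-root.fsa /dev/pve/root

# Restore later from a live/rescue environment:
# fsarchiver restfs /mnt/backup/pve-root.fsa id=0,dest=/dev/pve/root
```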


You could also look into the Veeam backup agent for Linux; it looks promising.

https://www.youtube.com/watch?v=g9J-mmoCLTs
 
But here comes another problem: the SATA SSDs came from our old spare parts (who still uses SATA SSDs these days, lol). Since initial testing on PVE went well, the server will soon get its 40G NIC and network storage, at which point most of the VMs should be moved to network storage.

With ESXi this is simple, since the "import/register VM from storage" function basically allows us to SSH into the server, move the VM's files to network storage, hook ESXi up to the network storage, and re-register the VM. I don't see a similar function within PVE, though, so what's the official way of moving a VM to another storage?

If you need a way to mass-migrate VM disk storage without having to use the GUI, I have a script for that.

https://github.com/kneutron/ansitest/blob/master/proxmox/proxmox-migrate-disk-storage.sh

If the storage is defined in the GUI, edit the script for source/destination storage and feel free to provide feedback.
It's tested in my homelab for ZFS nvme <--> nvme, but not for network-based storage
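Without the script, a plain shell loop over `qm` gets you most of the way; the disk name and target storage below are placeholders, and this assumes every VM's first disk is scsi0:

```shell
# Move the scsi0 disk of every VM on this node to "nfs-store",
# discarding the old local copy afterwards. "qm list" prints a
# header line, which the awk expression skips.
for vmid in $(qm list | awk 'NR>1 {print $1}'); do
    qm disk move "$vmid" scsi0 nfs-store --delete 1
done
```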
 
When you create a new VM using the VM-creation wizard (for lack of a better word), you will first be asked to configure a disk and then later be asked to choose storage. In my case the only option for the disk was 'local', but when I continued to the penultimate tab I was asked to pick the type of storage. My ZFS mount showed up for storage but not for the disk. This kinda makes sense, because I wasn't using a network disk.
 
If you are learning about Proxmox VE, you should definitely set aside some time and also learn about the Backup Server. It's a very nice design that closely integrates with VE. And it has a powerful command-line tool that you can use to back up things other than just containers and VMs. It might or might not handle all of your backup needs, but it can absolutely be part of the story.

I find PBS easy to use, and I use it to back up various other devices that are completely unrelated to Proxmox, in addition to backing up the containers, VMs and the host itself. Recovering the host from a backup would be a little tedious, as you'll encounter a bit of a bootstrapping problem; but it's doable. Most of the time, though, I find I don't need a full system restore. And if I only need to restore a subset of files, then that's really easy.

Also, if you have a cluster with multiple nodes, you are much less likely to ever have to do a full bootstrap from nothing. You simply bring up another node, adopt it into the cluster, and restore the VMs from backups. And that's assuming you didn't already replicate your VMs anyway, which would make this even easier.
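Backing up host files to PBS uses that same command-line client; a sketch, where the repository string and archive name are placeholders for your own PBS user, host, and datastore:

```shell
# Archive the host's /etc into a PBS datastore as a host-type backup.
proxmox-backup-client backup etc.pxar:/etc \
    --repository backupuser@pbs@pbs.example.com:datastore1

# List existing snapshots in the same repository:
proxmox-backup-client snapshot list \
    --repository backupuser@pbs@pbs.example.com:datastore1
```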
 
