Move VM to another node with different storage name

Florius

Jul 2, 2017
Hi,

I have a new server, bigger, better, more disk space.
I have a cluster with the old and new server.
Now I have to migrate the data of one VM on the disk called "storage" to the new server, where the disk is called "DATA".

How would I do this? If I try the normal right-click option, it says "storage" does not exist on the new host.
So how can I tell the VM to move the data from "storage" to "DATA"?

Thanks!

EDIT: I wanted to add that making a backup and restoring it is not an option; the VM is 4 TB, which I have no space for.
EDIT2: It's from a ZFS store to an LVM thin pool, if that makes a difference.
 
You can do a migration with local disks on the CLI, see 'qm help migrate'.
 
Hopefully this helps future googlers like me, since the 'qm help migrate' man page is a bit lacking.

I had to migrate VMs in a cluster between 2 nodes with differently named storage. This is impossible to reconcile via the GUI (at the time of writing, PVE 6.2) because it will only allow adding cluster storage if the same storage name exists on both nodes.

I used this command to migrate a VM from MYNODE1 (STEVE) to MYNODE2 (VOULA) with different source and destination storage names.
I use a dedicated migration network, but if you only have one network, that option can be omitted. I had the VM turned off, but I think it can be done while running with --online (I haven't tested).

Bash:
qm migrate 101 VOULA -migration_network 172.16.10.0/24 -targetstorage rpool:pool1 -with-local-disks

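A note on the syntax, since it confused me at first: per 'qm help migrate', -targetstorage takes a mapping from source storage to target storage, so rpool:pool1 above means "disks on rpool go to pool1". The help text also says that providing only a single storage ID maps all source storages onto that one storage, so for the original question something like this should do it (the node name and VM ID here are placeholders):

Bash:
# send every local disk of VM 100 to the storage named "DATA" on the new node
qm migrate 100 newnode -targetstorage DATA -with-local-disks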
 
I know this thread is super old, but Google brought me here, so I thought I would confirm that this DOES work for online / live migrations as well. Seems strange that the UI still doesn't allow this in PVE 8.2.2, but glad this is at least possible at the CLI.

qm migrate <vmId> <destHost> -migration_network <networkCidr> -targetstorage <targetStorage> -with-local-disks --online
 
If you do a migration with local storage, the migration dialog should show a drop-down field for the target storage.
[Screenshot: migration dialog with a "Target storage" drop-down]
 
Correct you are. I must have missed that second dropdown yesterday. I just tested it again and it works as you described.
 
If you do a migration with local storage, the migration dialog should show a drop-down field for the target storage.

This only works for online migrations from what I've seen. Is the command above the only option for offline migrations? I've always found it odd that this is the case.

Also, is there a way to migrate a VM while also moving different volumes to different storage targets? For example, we host our OS volumes on different storage targets (with different block sizes/settings) than our DB volumes. So far the only option (at least in the UI) seems to be to migrate every volume at once, then 'move storage' once the migration to that single storage target is done.

I'm OK with doing this via the CLI, I just didn't see this was an option until finding this thread just now.
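Digging into 'qm help migrate' a bit more, --targetstorage appears to accept a comma-separated list of source:target pairs. So if the OS and DB volumes sit on differently named source storages, something like this might split them in one go (all storage names here are made up, and I haven't tested this):

Bash:
# map each source storage to its own target storage during migration
qm migrate 101 node2 --targetstorage os-src:os-dst,db-src:db-dst --with-local-disks --online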
 
I also noticed that with OFFLINE migrations specifically, it doesn't show the option mentioned. I'm okay with moving it via the CLI, but coming from ESXi to Proxmox, this was an interesting nuance for me. I do appreciate how well Proxmox works overall though. :)
 
It seems that the option to change the storage name when migrating in the GUI only works for VMs and not CTs. Is that the case?
Can I use the shell to do this?
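In case it helps: newer PVE releases appear to have a --target-storage option on 'pct migrate' as well, though I'd verify with 'pct help migrate' on your version first, since older releases couldn't remap container storage at all. A rough sketch (container ID, node, and storage name are placeholders):

Bash:
# restart-mode migration of container 200 to node2, remapping its storage
pct migrate 200 node2 --restart --target-storage DATA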
 
Is the implication that when using local storage, the names should be identical on all nodes in the cluster? Yet they are all managed in a centralized storage.cfg file? How is that even possible? I assumed quite the opposite: that I should name each storage uniquely in the cluster.

If this is true, is it possible to "rename" a storage (even manually, by editing the storage.cfg file)?
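Not an authoritative answer, but as I understand it the storage ID in /etc/pve/storage.cfg is just a cluster-wide label, and "renaming" means editing that ID and then updating every guest config that still references the old one, since disks are referenced as <storage ID>:<volume>. An illustrative sketch only (a zfspool entry with hypothetical names, not meant to be pasted as-is):

Code:
# /etc/pve/storage.cfg: change the storage ID on the definition line
zfspool: DATA
        pool rpool/data
        content images,rootdir

# /etc/pve/qemu-server/101.conf: update disk references to the new ID
scsi0: DATA:vm-101-disk-0,size=32G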
 
When I import a VM from ESXi, I can send different disks to different storage:
[Screenshot: the ESXi import wizard, with a storage selector per disk]


So I consider it broken that I can't migrate from one host to another within a cluster and choose a different storage per disk. Extremely broken!
My screenshot above is just a VM with an EFI disk and a SCSI disk, as I've already deleted the ESXi VMs that had multiple SCSI disks per VM, but you get the idea.
So for now, it looks like I'll use the CLI to jam all the disks onto the biggest/fastest storage on the destination host, and then, once it's all copied, move the disks to where they need to be. *sigh*
I'm liking Proxmox for the most part, but some things like this are just brain-dead.
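For reference, that two-step dance would look something like this on the CLI (VM ID, node, and storage names are placeholders; on older PVE versions the second command is 'qm move_disk'):

Bash:
# step 1: migrate the whole VM, mapping all local disks to the big/fast storage
qm migrate 101 node2 --targetstorage fast-pool --with-local-disks --online

# step 2: on the destination node, move individual disks to their final storage
qm disk move 101 scsi1 db-pool --delete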