Can't live-migrate VMs which are located on an FC-SAN (LVM thick) to a different target

devaux

Is there a way (live or not) to migrate virtual disks to another storage?
Background: I have a cluster with 4 nodes and 2 SANs. Two nodes are attached to each FC-SAN, and I want to move the virtual disks from one SAN to the other.
Is this possible?
I'm pretty sure I've seen a live-migration dialogue in the web GUI where I could change the target storage before migrating, but not here. So I always get the error message "storage 'san01' is not available on node 'node04' (500)", even though on node04 I only have the san02 LVM storage.
Is this a limitation of LVM-thick?

Migrating the disk with the "Move Storage" disk action over an NFS share (all 4 nodes have access to it) works, but takes much more time.
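For reference, the same "Move Storage" action can also be run from the CLI with qm disk move (qm move_disk on older versions); the VM ID, disk and storage names below are placeholders, not taken from this setup:
Code:
# Move disk scsi0 of VM 100 to the NFS storage and delete the original copy afterwards
qm disk move 100 scsi0 nfs-share --delete 1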
 
I found out that you don't get the "Target storage" option in the live-migration dialogue as long as the source disks are on a storage where "shared" is enabled. As soon as I uncheck "shared", I am able to choose a different target for live migration.

Is this intentional (to avoid errors) or a bug?

 
This is working as intended: "shared" means "the exact same storage is available on multiple nodes", so there is nothing to move. If it is not the same storage, but only storage that happens to be accessible under the same name/path on multiple nodes, it is not shared and the option should be off. Local storage is an example of non-shared storage: the same path exists on all nodes, but the disks on it are not the same, so a migration has to move them.
 
But why is it not possible to migrate a VM from a "shared" storage to a local storage of another node?
I mean, it is possible, but I have to uncheck the "shared" option in the storage configuration first.
 
Because shared storage is expected to be available on both hosts and therefore doesn't need migrating, the option isn't offered.
You can still move disks between shared and local storage, but only if both are available on the same node; in that case you use the "Move Storage" disk action in the Hardware tab of the VM.

So either don't mark the storage as shared if it is not actually shared, or add the REAL shared storage to both nodes, migrate the disks there first, then migrate the VM while keeping the shared storage in the same spot.
 
That doesn't make sense to me.
Shared storage doesn't need to be accessible on all of your nodes. That's why you can define, for each storage, on which nodes it should be seen as available.

Example:
- node1 and node2 are directly connected to a FC-SAN (san1) => so I set san1 as shared and available for node1 and node2
- node3 and node4 are directly connected to another FC-SAN (san2) => so I set san2 as shared and available for node3 and node4
- But node1 and node2 don't have access to san2, and node3 and node4 don't have access to san1. They can only reach each other over the network.

With this setup I am NOT able to move VMs from node1/2 to node3/4 (and vice versa) if their disks are on san1/san2.
But if you uncheck "shared" on the SAN where the disks of the source VM are, you are able to move the VM/disks to a node attached to the other SAN.
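For illustration, a setup like this would roughly correspond to the following storage definitions (a sketch only; the pvesm commands, volume group names and options are assumptions, not taken from the actual configuration):
Code:
# san1 is only reachable from node1 and node2, san2 only from node3 and node4
# (volume group names are illustrative placeholders)
pvesm add lvm san1 --vgname vg_san1 --content images --shared 1 --nodes node1,node2
pvesm add lvm san2 --vgname vg_san2 --content images --shared 1 --nodes node3,node4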
 
A quick copy from https://pve.proxmox.com/wiki/Storage:

One major benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime, as all nodes in the cluster have direct access to VM disk images.
(...)
Sharing storage configuration makes perfect sense for shared storage, because the same “shared” storage is accessible from all nodes
So it seems that it is at least not common practice to have shared storage that is not available on all nodes.

I do get your point/situation, mind you, but there is nothing I can change about it other than what you've already figured out by toggling the option. Note that having that option off on storage that IS shared can cause issues too (it may try to move disks from and to the same location).
 
The problem here is that the complexity of the GUI can't be increased indefinitely.
The GUI would then have to check, every time, whether the respective datastore is available on the target node. In a large cluster with many datastores, a lot of scanning would have to be done before the wizard could even start.
That's why the simple approach was chosen: if it's a shared storage, it is assumed that the datastore is available everywhere in the cluster, or can be made available. On a single host, I can still migrate from shared to local storage.

If you want to perform migrations between different hosts and different shared storages, you have to do this via the CLI.
 
Yes, the only problem I've encountered so far is that when I move VMs from node1 to node2 while "shared" is not enabled, the VM disks are copied again, instead of just moving the VM from node1 to node2 and keeping the disk (which is on the same SAN).

So I'm asking myself why it's possible to set the availability of each storage per node, while on the other hand "shared" assumes that it is available on every node.

I'm fine with temporarily unchecking the "shared" box when I have to move VMs between node1/2 and node3/4, but I'm wondering why it's not automatically possible.
Is it because it is not common, or because I should expect problems?
 
Quite simply because it is not usual.

If you look at cluster design in general, whether for virtualization or another workload, you generally configure all nodes so that they can all reach the shared storage. With many cluster types it is still common to have a quorum disk, which must of course be accessible from every node.
Your scenario is very unusual in normal operation and only comes up in migration scenarios. You can temporarily remove the "shared" checkmark or perform the migration via the CLI.
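Temporarily removing the "shared" checkmark can also be done from the CLI with pvesm, which saves clicking through the GUI for every migration (the storage name is a placeholder for the SAN storage holding the source disks):
Code:
# Temporarily mark san1 as non-shared so a different target storage can be selected
pvesm set san1 --shared 0
# ... perform the migration ...
pvesm set san1 --shared 1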
 

OK, this makes sense. Thanks!
I am totally fine with manually disabling the option, as long as I know that I won't break anything. And yes, it's only used for migrating.

What would the migration command look like for changing the target storage?
I've tried
Code:
qm migrate <VM-ID> <TARGET-NODE> --online --with-local-disks --targetstorage local-lvm
But it won't migrate to a different storage and keeps using the "shared" storage of the SAN.
 
I have not yet tested this myself, as I have not built such a setup anywhere.
I would have thought that it should work with qm migrate.
With qm remote-migrate it works when migrating between two clusters.
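For completeness, a qm remote-migrate call looks roughly like the sketch below; the API token, fingerprint, address, bridge and storage names are placeholders and have to be created/looked up on the target cluster first:
Code:
# Migrate VM 100 to a node of another cluster reachable at 10.0.0.20, keeping the VM ID,
# and remove the source VM after a successful migration (all values are illustrative)
qm remote-migrate 100 100 \
  'host=10.0.0.20,apitoken=PVEAPIToken=root@pam!migrate=<SECRET>,fingerprint=<FINGERPRINT>' \
  --target-bridge vmbr0 --target-storage local-lvm --online --delete 1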
 
Fun fact: "qm migrate" behaves the same as doing the migration in the GUI. It ignores "--targetstorage" as long as the source is on a "shared" storage. I have to uncheck the "shared" option on the storage first, otherwise it either won't migrate to a different storage or it stays on the same shared storage.

I guess qm remote-migrate would do it, but since you have to generate an API token, it's easier to just uncheck the "shared" option in the GUI. Still, it's a cool option if you want to migrate VMs into a new cluster. I'm pretty sure I can use it in the near future.
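Putting the pieces from this thread together, the workflow that ends up working within one cluster would look roughly like this (a sketch under the assumptions above; VM ID, node and storage names are placeholders):
Code:
# 1. Temporarily mark the source SAN storage as non-shared
pvesm set san1 --shared 0
# 2. Live-migrate the VM and copy its disks to the storage available on the target node
qm migrate 100 node03 --online --with-local-disks --targetstorage san2
# 3. Mark the source storage as shared again
pvesm set san1 --shared 1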
 
