Migration Doesn't Always Show Storage Options

Korishan

New Member
Jun 16, 2024
I'm migrating a VM from one PVE node to another. I start the migration and it transfers successfully, but after it's transferred I have to move the disk from one storage to the other.

Node-A Storage is: Storage_PVE_1
Node-B Storage is: Storage_PVE_2

So after I've migrated, I have to do a Volume Action -> Move Storage from Storage_PVE_1 to Storage_PVE_2

But this isn't always the case.
If there's an error about available space, I get a Target Storage option during the migration to move to the other storage.
[screenshots]

Why doesn't the option to select storage during migration always show up? Is there a way to make the Target Storage selector always appear?
 
please post the storage.cfg contents and the VM configs for both VMs
 
storage.cfg:
Code:
zfspool: storage1
        disable
        pool raid_storage
        content rootdir,images
        mountpoint /raid/storage1
        sparse 0

dir: storage2
        path /raid/storage1/storage2
        content images,rootdir,iso,vztmpl
        prune-backups keep-all=1
        shared 0

dir: raid_storage
        path /raid/storage1
        content images,iso,vztmpl,backup,rootdir,snippets
        prune-backups keep-all=1
        shared 0

dir: hp2_proxmox
        path /pools/pool_hp2_1/proxmox
        content snippets,rootdir,vztmpl,iso,images,backup
        prune-backups keep-all=1
        shared 1

2011.conf (pihole):
Code:
agent: 1
boot: order=scsi0;ide2;net0
cores: 2
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=9.2.0,ctime=1743868524
name: pihole-11
net0: virtio=BC:24:11:48:A2:FC,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: storage2:2011/vm-2011-disk-0.qcow2,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=33d8e838-7187-4bb5-967c-8993828bedfd
sockets: 1
vmgenid: 263ab691-3596-4f9e-bd96-1f0f2cf8d43c

101.conf (Truenas):
Code:
agent: 1
boot: order=scsi0;ide2;net0
cores: 2
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 12288
name: TrueNAS
net0: virtio=BE:BC:D7:51:A2:C2,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsi0: storage2:101/vm-101-disk-0.qcow2,size=32G
scsi1: storage2:101/vm-101-disk-1.qcow2,size=32G
scsi10: /dev/disk/by-id/wwn-0x5000c50085e8b5cf,size=3907018584K
scsi12: /dev/disk/by-id/wwn-0x5000c50085e895f3,size=3907018584K
scsi13: /dev/disk/by-id/wwn-0x5000c50085e828f7,size=3907018584K
scsi14: /dev/disk/by-id/scsi-35000c50085e90d73,size=3907018584K
scsi3: /dev/disk/by-id/ata-HGST_HDN726040ALE614_K7J1LGUL,size=3907018584K
scsi4: /dev/disk/by-id/wwn-0x5000c500c4c9db57,size=3907018584K
scsi5: /dev/disk/by-id/wwn-0x5000c500cf68bd87,size=3907018584K
scsihw: virtio-scsi-pci
smbios1: uuid=eb006c18-4cfd-4346-9088-5dc2eaf29085
sockets: 1
startup: order=2
vmgenid: 8b31d22b-05b8-43f1-b319-65a9d0f9f71a
 
the second guest has pass-through disks, so those need to be copied to a different storage.. are you sure the rest of your config is correct? because your initial post had different storage names.. in particular, if the disks of pihole are actually on hp2_proxmox which is marked as shared, that would explain why you don't get a target storage selection..
 
That was just an example. There was another one that gave the same options as 101.

Storage names in the OP were examples, not the actual names as I was posting quickly.

The thing is, if the VM is bigger than the available space on the "default" storage of the node being migrated to, we get the option to select "where" to store the disks. But if it isn't, we don't get that option at all.

I have several storages on Node 1 (pve-hp1), and if I try to migrate from Node 2 (pve-hp2) to Node 1, I don't get the option to select where to store the disks.

"storage2" is physically on pve-hp1. When I migrate a VM/CT to pve-hp2, the disks "stay" on storage2 instead of moving to "hp2_proxmox", which is physically on pve-hp2.

Looking at the GUI to get some more details, I noticed that pve-hp2 has copies of the same storage entries.
Code:
zfspool: storage1
        disable
        pool raid_storage
        content rootdir,images
        mountpoint /raid/storage1
        sparse 0

dir: storage2
        path /raid/storage1/storage2
        content images,rootdir,iso,vztmpl
        prune-backups keep-all=1
        shared 0

dir: raid_storage
        path /raid/storage1
        content images,iso,vztmpl,backup,rootdir,snippets
        prune-backups keep-all=1
        shared 0

dir: hp2_proxmox
        path /pools/pool_hp2_1/proxmox
        content snippets,rootdir,vztmpl,iso,images,backup
        prune-backups keep-all=1
        shared 1

Not only that, but it stores them on the main drive, so I was running out of space. The folder "/raid" is a mount point for the raid ZFS pools on pve-hp1, but that is not the case on pve-hp2, where the pools are mounted under "/pools".

So this adds to the question: why can't we choose which storage location to migrate to? And why does it just default to the same storage name it is migrating from?
 
if a storage is only available on a single node, you need to tell PVE. in particular if it's a directory storage, because else PVE will just create those directories on all nodes and use them as storage..
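for example (just a sketch using the storage names from this thread - adjust to your actual layout), you can set the node restriction either via Datacenter -> Storage -> Edit -> Nodes, or on the CLI:
Code:
# limit the storages that only exist on pve-hp1 to that node
pvesm set storage2 --nodes pve-hp1
pvesm set raid_storage --nodes pve-hp1
# limit the directory that only exists on pve-hp2 to that node
pvesm set hp2_proxmox --nodes pve-hp2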
 
Storage is available on both nodes. When migrating from one node to another it should ask where to store the data, not assume it's the same storage name.
The storage folders/names were created on pve-hp2 automatically when it joined the cluster. I didn't know about this until recently when I started migrating. The two servers were in the same cluster for several months. I had no idea that each node in the cluster had to have the "exact" same storage layout for it to function correctly.
I've seen lots of videos where they had multiple nodes using different hardware. It's not obvious that you have to specifically limit storage to specific nodes.

I now see where the problem is. First, PVE automatically creates the folders for cross-node storage without informing the user. Second, it will not allow the user to change storage during migration unless the migration data is too large to fit on that storage, at which point it shows the list of available storage options.
During migration it should always ask the user where they want to store the data, not assume it can only go to the storage with the same name.

I see that I now have to go back to the storage options for the cluster and manually mark each storage as Restricted to the node it belongs to.
[screenshot]

And changing all the pve-hp1 storage options to be restricted to only that node makes the Target Storage option show up:
[screenshot]

HOWEVER, it only shows up for VMs that are running! If the VM/CT isn't running, I don't get that option. In fact, I get a completely different error message:
[screenshot]

WHY?! Why are these restrictions here? This makes no sense at all. We should be able to migrate from one node to the other regardless of whether it's running or not.
 
because migrating a running VM or a stopped one uses an entirely different migration mechanism.

you keep posting new issues without providing the necessary details regarding the configuration.

if you want to get an explanation, please post
- the full storage.cfg
- the VM config

thanks!
 
because migrating a running VM or a stopped one uses an entirely different migration mechanism.

you keep posting new issues without providing the necessary details regarding the configuration.

if you want to get an explanation, please post
- the full storage.cfg
- the VM config

thanks!
I already posted both configuration files. I don't think you want me to post *all* of my VM configuration files. And the only thing not included in the storage.cfg is the local and local-lvm storages. I figured those were pretty generic; I don't use those storages for VM/CTs and I disable them after installation of PVE.


I was able to migrate the pihole VM, but it was running. Oddly, I could only do this after restricting the storage options in the Cluster Storage section. Before, it kept failing because the disk didn't exist:
[screenshot]
After restricting storage to specific nodes, the option to select a storage location showed up and I was able to successfully migrate the VM. The previous image showing the storage options during migration was taken from that migration.

You state that Live Migration and Offline Migration are two different methods. Sure, I get that. But both migration options should still give the option of where to store the migrated files. Not just "assume" a single location without confirming with the user where those files are going. Target Storage should always show up, even if it's automatically populated with a default location.
 
you didn't post the config of VM 2002

Before, it kept failing because the disk didn't exist:

that is because you wrongly marked the storage as shared, which I already told you way up in the thread.. a disk on a shared storage is not migrated at all, hence there is no target storage to select. your storage doesn't appear to actually be shared, so the config is wrong.
You state that Live Migration and Offline Migration are two different methods. Sure, I get that. But both migration options should still give the option of where to store the migrated files. Not just "assume" a single location without confirming with the user where those files are going. Target Storage should always show up, even if it's automatically populated with a default location.
it will show the option when it makes sense. if your configs/.. don't allow migrating with a target storage, you also won't be given the option.
 
you didn't post the config of VM 2002
All VMs have basically the same configuration and hardware setup. The only differences are the ID, storage allotment, and memory/CPU allotment. Otherwise they're the same (excluding the TrueNAS VM, of course).
The issues I am having are with all of the VM/CT's, not just specific ones.

that is because you wrongly marked the storage as shared, which I already told you way up in the thread.. a disk on a shared storage is not migrated at all, hence there is no target storage to select. your storage doesn't appear to actually be shared, so the config is wrong.
I have to disagree here. The only storage marked as shared was the "hp2_proxmox" storage. The others were not marked as shared. So with that, if "hp2_proxmox" storage was shared, it should have been the preferred storage location

it will show the option when it makes sense. if your configs/.. don't allow migrating with a target storage, you also won't be given the option.
This doesn't make sense. When I changed all my storage locations to be restricted to their node, the storage options showed up. But they only show up for Live Migration.
Where is the setting I need to enable for Offline Migration? This seems like a base configuration option rather than a VM/CT configuration option. However, I don't see any option about migration during VM/CT creation or under any of the Options listed for them. Is this something that is configured manually in the cfg file?
[screenshot]
Here's 2002.conf, even though it's practically identical to the other one I posted
Code:
agent: 1
boot: order=sata0;ide2;net0
cores: 4
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 6144
name: guac
net0: virtio=C2:C0:F6:4F:F7:11,bridge=vmbr0,firewall=1
numa: 1
ostype: l26
sata0: raid_storage:2002/vm-2002-disk-0.qcow2,size=64G,ssd=1
scsihw: virtio-scsi-pci
smbios1: uuid=ee34b83d-e8e6-4208-8a13-4a647898bbaa
sockets: 1
vmgenid: dab660dd-c0c6-4e13-abe2-48d49abdf2c1
 
I have to disagree here. The only storage marked as shared was the "hp2_proxmox" storage. The others were not marked as shared. So with that, if "hp2_proxmox" storage was shared, it should have been the preferred storage location
again - you marked a storage as shared that is not shared. you broke the invariants PVE operates on. a disk on a shared storage is not migrated at all, because it is supposed to already exist with the same contents on the target (else the storage is not shared). the reason the target storage option showed up after you restricted the "shared" storage to the source node is that the storage itself was no longer available on the target side, so the disk now needs to be migrated.

I think you have a fundamental misunderstanding how storage configuration works in PVE, maybe it would help to clear that up first?

you have two kinds of storages: local and shared. local storages have per-node content (e.g., a local directory, a local ZFS pool), shared storages have identical content across all nodes where they are enabled (e.g., an NFS share, or a distributed storage like Ceph). a shared storage is normally ignored for the purpose of migration. using local storage means the disks have to be migrated. usually you'd have a single storage.cfg entry for local storage on each node (e.g., you have a single local-lvm storage.cfg entry, even though the disk and VG on each node is separate). that way you can migrate without the need to "change" the volume. you shouldn't have X different local storages of the same type limited to a single node each, unless you want to have to specify target storages for every operation.

now back to your example:

your VM 2002 is
- not running
- has a single disk on a local storage limited to the source node (if I understand your changes correctly - you haven't posted the current storage.cfg after all)

this means it requires offline disk migration to another storage. this is not exposed on the UI (because it uses a different migration mechanism that is less flexible than the live one), which is why the UI tells you to start the VM to use live migration, or check if another node has the matching storage (offline migration to the same storage). you can do an offline migration using the API or CLI (provided your source and target storage are a valid combination for the given VM config, i.e., not all combinations support all disk formats or snapshots), which gives you the full range of options without any hand holding.
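for example, a minimal sketch of such an offline migration from the CLI on the source node, using the VM/node/storage names from this thread (whether it works still depends on the source/target storage combination as described above):
Code:
# offline migration of VM 2002 to pve-hp2, placing its disk on hp2_proxmox
qm migrate 2002 pve-hp2 --targetstorage hp2_proxmox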
 
again - you marked a storage as shared that is not shared. you broke the invariants PVE operates on. a disk on a shared storage is not migrated at all, because it is supposed to already exist with the same contents on the target (else the storage is not shared). the reason the target storage option showed up after you restricted the "shared" storage to the source node is that the storage itself was no longer available on the target side, so the disk now needs to be migrated.
The storage is not shared. I showed the config and screenshot of the storages and they are not shared. The VM/CTs were being migrated to the same folder name as on pve-hp1. They were not being stored anywhere else.

I think you have a fundamental misunderstanding how storage configuration works in PVE, maybe it would help to clear that up first?

you have two kinds of storages: local and shared. local storages have per-node content (e.g., a local directory, a local ZFS pool), shared storages have identical content across all nodes where they are enabled (e.g., an NFS share, or a distributed storage like Ceph). a shared storage is normally ignored for the purpose of migration. using local storage means the disks have to be migrated. usually you'd have a single storage.cfg entry for local storage on each node (e.g., you have a single local-lvm storage.cfg entry, even though the disk and VG on each node is separate). that way you can migrate without the need to "change" the volume. you shouldn't have X different local storages of the same type limited to a single node each, unless you want to have to specify target storages for every operation.

now back to your example:

your VM 2002 is
- not running
- has a single disk on a local storage limited to the source node (if I understand your changes correctly - you haven't posted the current storage.cfg after all)

Yes, I do not understand how migration handles storage. It does not make sense, and it is not consistent. I have tried several different times and it didn't make sense either way.
The storages I am migrating from are not shared. The storage I was initially migrating to was shared; however, I turned that off and I still couldn't migrate into it.
It wasn't until I changed the storage options, per storage entry, to "Restricted" to each of the nodes that the option to select a migration destination showed up. And then it only showed up for VM/CTs that were running.

Original storage.cfg
Note: Storage2 is not shared
Code:
zfspool: storage1
        disable
        pool raid_storage
        content rootdir,images
        mountpoint /raid/storage1
        sparse 0

dir: storage2
        path /raid/storage1/storage2
        content images,rootdir,iso,vztmpl
        prune-backups keep-all=1
        shared 0

dir: raid_storage
        path /raid/storage1
        content images,iso,vztmpl,backup,rootdir,snippets
        prune-backups keep-all=1
        shared 0

dir: hp2_proxmox
        path /pools/pool_hp2_1/proxmox
        content snippets,rootdir,vztmpl,iso,images,backup
        prune-backups keep-all=1
        shared 1

Changed storage.cfg
Note: None of the storage is shared.
Code:
zfspool: storage1
        disable
        pool raid_storage
        content rootdir,images
        mountpoint /raid/storage1
        nodes pve-hp1
        sparse 0

dir: storage2
        path /raid/storage1/storage2
        content iso,images,vztmpl,rootdir
        nodes pve-hp1
        prune-backups keep-all=1
        shared 0

dir: raid_storage
        path /raid/storage1
        content rootdir,snippets,backup,images,iso,vztmpl
        nodes pve-hp1
        prune-backups keep-all=1
        shared 0

dir: hp2_proxmox
        path /pools/pool_hp2_1/proxmox
        content iso,vztmpl,rootdir,backup,snippets,images
        nodes pve-hp2
        prune-backups keep-all=1
        shared 0

The only difference now is that each storage has a node restriction added. Before the node restriction, these storages did not show up as possible migration destinations.


this means it requires offline disk migration to another storage. this is not exposed on the UI (because it uses a different migration mechanism that is less flexible than the live one), which is why the UI tells you to start the VM to use live migration, or check if another node has the matching storage (offline migration to the same storage). you can do an offline migration using the API or CLI (provided your source and target storage are a valid combination for the given VM config, i.e., not all combinations support all disk formats or snapshots), which gives you the full range of options without any hand holding.
And this is the bit that doesn't make sense. All the UI does is execute the CLI for the user. If it can do it for Online Migration, why can't it be added for Offline Migration? There's even a log showing it using the CLI.

The UI could at least tell us these little details and say we have to do Offline Migration in the CLI if it's not going to be done in the UI.

The only reason I had set one of the storages to Shared was because I thought it was required to allow a single share point shared between nodes. I.e. the disk(s) are stored on Node-A but the config resides on Node-B; the VM is launched from Node-B, pulls the required data across the network and runs on Node-B, but the bulk storage is on Node-A.
I still don't understand the purpose of Shared, as even the non-shared storages showed up on my second node when it joined the cluster. The folders are all created on pve-hp2's root filesystem and share the capacity of that drive.
 
And this is the bit that doesn't make sense. All the UI does is execute the CLI for the user. If it can do it for Online Migration, why can't it be added for Offline Migration? There's even a log showing it using the CLI.
the UI doesn't execute the CLI, it does API calls. and yes, we don't put everything that the API can do on the UI (for UX reasons, or because some API might be "dangerous" for unwitting users, or because we consider it not stable enough yet for that kind of exposure). offline storage migration falls into the latter category - it was initially added as a sort of experiment, and then piece by piece it got more stable - it is now enabled on the UI as long as you don't change storage.
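to illustrate (a sketch only, with the node/VM/storage names from this thread): the migrate API endpoint that the UI calls can also be invoked directly, e.g. via pvesh:
Code:
# online migration with local disks and an explicit target storage,
# roughly the call the UI makes when the selector is shown
pvesh create /nodes/pve-hp1/qemu/2011/migrate --target pve-hp2 \
    --online 1 --with-local-disks 1 --targetstorage hp2_proxmox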
The only reason I had set one of the storages to Shared was because I thought it was required to allow a single share point shared between nodes. I.e. the disk(s) are stored on Node-A but the config resides on Node-B; the VM is launched from Node-B, pulls the required data across the network and runs on Node-B, but the bulk storage is on Node-A.
I still don't understand the purpose of Shared, as even the non-shared storages showed up on my second node when it joined the cluster. The folders are all created on pve-hp2's root filesystem and share the capacity of that drive.

that's what I tried to explain to you in my earlier response. a shared storage requires the underlying storage to present the content in an identical manner on all nodes where it is enabled. a local storage doesn't. if you configure a directory on your host as shared that is actually local, you are lying to PVE and things will break (this is basically what you did and why the migration failed "because the disk didn't exist").

what you normally do if you have a shared storage:
- setup the shared storage (e.g., configure ceph cluster, create NFS export/CIFS share, ..)
- create a single storage entry for that storage
- you will now see this storage on each node, and the contents and usage should be identical
- you can now migrate using this storage, and PVE will skip the disks on it since they already "exist" on the target node

what you normally do if you don't have a shared storage:
- setup a directory/LVM VG/ZFS pool/.. on each node (e.g., backed by a local disk, mounted on /mnt/pve/foobar on each node. or a ZFS pool + dataset with the same name on each node. or a LVM VG with the same name on each node)
- create a single storage entry for those directories/...
- you will now "see" this storage on each node, but the contents and usage will be different on each node
- you can now migrate using this storage, and PVE will copy the disks as part of the migration from one node to the other

if you want to switch storage while migrating (which is only needed if you have a different storage.cfg entry for each node!), you need to either use the API or CLI, or live migration where this feature is already exposed on the UI. we will expose target storage selection also for offline migration at some point, but that doesn't change that your current configuration was broken and is still non-standard and sub-optimal.
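applied to your setup, a single entry could look roughly like this sketch (the name and path are just examples - it assumes the same path exists locally on both nodes):
Code:
dir: vmdata
        path /mnt/pve/vmdata
        content images,rootdir
        prune-backups keep-all=1
        shared 0

both nodes then see "vmdata", each with its own local contents, and migrations keep the disks on that same logical storage without needing a target storage selection.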
 
if you configure a directory on your host as shared that is actually local, you are lying to PVE and things will break (this is basically what you did and why the migration failed "because the disk didn't exist").
But it wasn't the shared storage that it was trying to put it in. That's what I was trying to explain. The storage it is on, on pve-hp1, is not the shared storage. I don't know why you keep overlooking this and keep saying this is the reason it broke.

the UI doesn't execute the CLI, it does API calls. and yes, we don't put everything that the API can do on the UI...
Ok, this makes sense

- you will now see this storage on each node, and the contents and usage should be identical
The problem is that all the storages showed up on the node that was added to the cluster. It didn't matter if they were shared or not. This is one of the big reasons why it was so confusing.

we will expose target storage selection also for offline migration at some point
this is also confusing because the target storage does show up as available under specific conditions, such as when the target's default storage doesn't have enough space.



I'm understanding it a whole lot better now, thank you for going through and breaking it down. I just wish it was a little more straightforward as to what was going on. I didn't understand why the target storage option sometimes showed up and other times didn't. It wasn't very clear why.
 
But it wasn't the shared storage that it was trying to put it in. That's what I was trying to explain. The storage it is on, on pve-hp1, is not the shared storage. I don't know why you keep overlooking this and keep saying this is the reason it broke.
in one of your examples the disk was on that storage though, which caused that attempt to fail - which is what I've been telling you ;)
The problem is that all the storages showed up on the node that was added to the cluster. It didn't matter if they were shared or not. This is one of the big reasons why it was so confusing.
this is not a problem, this is how storages work in PVE. unless you explicitly restrict a storage (no matter if local or shared), it is configured for all nodes. this is the default, and how you should configure your storage unless there is a reason to deviate from that (e.g., if you have a slow local storage on all nodes, and a fast local storage on a subset of nodes, that fast storage obviously will need to be restricted to just the nodes where it exists. or if one half of your cluster uses LVM, and the other half uses ZFS).

this is also confusing because the target storage does show up as available under specific conditions, such as when the target's default storage doesn't have enough space.

I think you are confused by all the experiments and config changes you have done, as that is not the case.

there is an API endpoint that determines where a VM can migrate to:

https://pve.proxmox.com/pve-docs/api-viewer/index.html#/nodes/{node}/qemu/{vmid}/migrate

it will return which local disks or resources are configured, and which nodes are allowed or not allowed as migration targets (and why).
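you can query it directly for a given VM to see the reasoning, e.g. (using VM 2002 from this thread):
Code:
# migration precondition check - lists local disks/resources and
# which target nodes are allowed or not allowed (and why)
pvesh get /nodes/pve-hp1/qemu/2002/migrate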

you can see here the code that implements this check:

https://git.proxmox.com/?p=qemu-ser...98f6e858dcc475937d3e112793384ef;hb=HEAD#l4821
https://git.proxmox.com/?p=qemu-ser...d8ee180b21bef3636d2c0dab20e1bdb;hb=HEAD#l2632

note how there are no checks for size involved at all..

and here you can see where the decision in the UI is made whether a target node is allowed or not:

https://git.proxmox.com/?p=pve-mana...5ce581168ffe7c2f254bfdf796255420;hb=HEAD#l226

and here you can see how the target storage selector is hidden or made visible:

https://git.proxmox.com/?p=pve-mana...d5ce581168ffe7c2f254bfdf796255420;hb=HEAD#l48
 
in one of your examples the disk was on that storage though
[screenshot]
Storage2 was not the shared storage. The destination storage is the one that I changed to shared to see if it would allow me to send to it.

I think you are confused by all the experiments and config changes you have done
I haven't done a lot of changes, though. That's the thing. I'd change a storage to shared, it didn't work, so I'd revert it back. Try to migrate while running: the option didn't show up. Try to migrate while shut down: the option didn't show up.
Then I stumbled on a VM that was too big for the storage it was trying to send to by default, Storage2, and that's when the option showed up and I realized that destination selection was actually possible.

The biggest thing I don't understand at this point is why the Target Destination doesn't always show up. If it can show up when the default storage is full, then why can't it show up always?? That was the biggest issue that drove me to come here and inquire about it. Even if there are 2 different API calls on the backend doing the work, and even if there was a difference between Online/Offline, why wasn't it always showing up?

Apparently the UI is already doing the proper calls in either case, enough space available vs not available. That's why I was confused


Considering that several checks are for whether the VM is "running" or not to determine if it can be migrated, I don't see why that would be the limiting factor for a storage selector to show up or not. What's the difference between an Online and an Offline VM, other than one is using active resources? I would think that Offline would be easier to move.

In the 'formulas' call, there's the line: "return gettext('Restart Mode');". So in this instance it shuts down the VM, moves it, then restarts it. Essentially it's putting the VM in Offline mode temporarily and then brings it back Online after a successful move.

I just would like to add that I'm not trying to be difficult; please excuse me if it appears that way. I'm trying to better understand this all around, and also so that anyone else who comes across this same situation can get as much info in a single thread about what's going on, instead of going nuts looking at bits and pieces of other threads. I appreciate the time taken to show the actual code (even though it's a bit over my head to really understand). ;)
 
The biggest thing I don't understand at this point is why the Target Destination doesn't always show up. If it can show up when the default storage is full, then why can't it show up always?? That was the biggest issue that drove me to come here and inquire about it. Even if there are 2 different API calls on the backend doing the work, and even if there was a difference between Online/Offline, why wasn't it always showing up?

Apparently the UI is already doing the proper calls in either case, enough space available vs not available. That's why I was confused

no it does not. the target storage selector will show up if the VM is running and has local disks. it will not show up if either of those two conditions is not met (as the target-storage option is not available on the UI for offline migrations). in addition, the target node is considered invalid if the VM is not running, but local disks are used and their storage is not available on the target node (because again, this would require offline migration with a target-storage, which is not available on the UI).

Considering that several checks are for whether the VM is "running" or not to determine if it can be migrated, I don't see why that would be the limiting factor for a storage selector to show up or not. What's the difference between an Online and an Offline VM, other than one is using active resources? I would think that Offline would be easier to move.

like I said - live migration came first. offline migration with local disks and switching storages came later, and was never added as a feature on the UI. it is probably stable enough nowadays that it could be exposed as an option on the UI, but that step hasn't been done yet.

In the 'formulas' call, there's the line: "return gettext('Restart Mode');". So in this instance it shuts down the VM, moves it, then restarts it. Essentially it's putting the VM in Offline mode temporarily and then brings it back Online after a successful move.

that is for containers, which don't have live migration at all. your examples were all for VMs, as were my replies. the UI code is shared across both.


I will try to summarize it for you once more:

you should restructure your storage.cfg so you have a single "logical" entry for your local directory storage that is valid for both/all nodes. the contents of that storage will be different on the nodes, but that is okay - PVE knows how to handle that. if you use ZFS for both nodes, the same applies there as well (create the same pool/dataset structure on both nodes, and define a single storage for that). then both online and offline migration should work and keep the disks on the same storage. if you want to change storage, you can use live migration or the move disk feature (or use qm migrate with --targetstorage ...).
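for example (a sketch with names from this thread - on older releases the command is qm move_disk), moving a disk to another storage on the same node would look like:
Code:
# move VM 2002's disk sata0 to the storage2 directory storage
qm disk move 2002 sata0 storage2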
 