Questions regarding migration between nodes

Oct 10, 2022
I'm in the process of planning to move multiple CTs/VMs to a new machine, and the method I initially chose is creating a cluster and joining the new machine. Then, I will migrate the CTs/VMs.

1- My old PVE host uses LVM/LVM-thin. If I join the new node, also with LVM/LVM-thin, I only get the option to run offline migration or restart mode when I try to move LXC containers. Why is live migration only available for VMs?

2- Why is live migration not available in my cluster, even for VMs?

3- I'm thinking about changing the new node to use ZFS instead. Is it possible to migrate CTs/VMs to a node that has ZFS?
 
Hi,

1- My old PVE host uses LVM/LVM-thin. If I join the new node, also with LVM/LVM-thin, I only get the option to run offline migration or restart mode when I try to move LXC containers. Why is live migration only available for VMs?
Since VMs are more abstracted than containers, it's easier to just pause them and continue on a different node. With containers, there are resources tied to the running processes, like file descriptors, network sockets, etc., that are not so easy to seamlessly transfer to another machine.
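Restart-mode migration can also be triggered from the CLI; a minimal sketch, where the CT ID and node name are placeholders:
Code:
# Shut down CT 101, migrate it, and start it again on the target node
pct migrate 101 pve-new --restart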

2- Why is live migration not available in my cluster, even for VMs?
Good question indeed. What error do you get when you try?

3- I'm thinking about changing the new node to use ZFS instead. Is it possible to migrate CTs/VMs to a node that has ZFS?
The file system should be abstracted away when transferring VMs and CTs, so the underlying FS on the target node should not play a role.
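For example, if the ZFS-backed storage on the new node has a different name, you can remap it during migration. A sketch with placeholder VM ID, node, and storage names; depending on your setup and PVE version, local disks may additionally require --online and --with-local-disks:
Code:
# Migrate VM 100 to node 'pve-new', moving its disks onto the 'local-zfs' storage
qm migrate 100 pve-new --targetstorage local-zfs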

Btw, if you are only joining the nodes to transfer the VMs and don't intend to keep them in a cluster, you could also try qm remote-migrate:
Code:
USAGE: qm remote-migrate <vmid> [<target-vmid>] <target-endpoint> --target-bridge <string> --target-storage <string> [OPTIONS]

  Migrate virtual machine to a remote cluster. Creates a new migration
  task. EXPERIMENTAL feature!

  <vmid>     <integer> (100 - 999999999)
        The (unique) ID of the VM.

  <target-vmid> <integer> (100 - 999999999)
        The (unique) ID of the VM.

  <target-endpoint> apitoken=<PVEAPIToken=user@realm!token=SECRET>
        ,host=<ADDRESS> [,fingerprint=<FINGERPRINT>] [,port=<PORT>]
        Remote target endpoint

  --bwlimit  <integer> (0 - N)    (default=migrate limit from datacenter or
        storage config)
        Override I/O bandwidth limit (in KiB/s).

  --delete   <boolean>    (default=0)
        Delete the original VM and related data after successful
        migration. By default the original VM is kept on the source
        cluster in a stopped state.

  --online   <boolean>
        Use online/live migration if VM is running. Ignored if VM is
        stopped.

  --target-bridge <string>
        Mapping from source to target bridges. Providing only a single
        bridge ID maps all source bridges to that bridge. Providing
        the special value '1' will map each source bridge to itself.

  --target-storage <string>
        Mapping from source to target storages. Providing only a
        single storage ID maps all source storages to that storage.
        Providing the special value '1' will map each source storage
        to itself.
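An invocation could then look something like this; the VM ID, host address, token, and mapping values below are all placeholders:
Code:
# Live-migrate VM 100 to the remote cluster, keeping the same VMID,
# mapping all source bridges to vmbr0 and all source storages to local-zfs
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=<SECRET>,host=192.0.2.10,fingerprint=<FINGERPRINT>' \
  --target-bridge vmbr0 --target-storage local-zfs --online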
 
@Folke, thank you for the very detailed and prompt reply!!!

While detaching the secondary node I ran the commands on the wrong host, so right now I'm trying to get the cluster (or at least the main node) restored; see issue #152519.

I will get back to this thread once I resolve the other issue.