Current setup is two servers connected to an uplink switch, plus a 10Gb DAC cable run directly between them.
Cluster is enabled. I have set migration to happen over the 10Gb DAC (confirmed working).
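For reference, the migration network is pinned in /etc/pve/datacenter.cfg with something along these lines (the exact subnet and type shown here are my guesses based on the 172.16.0.1 address in the log below):

Code:
migration: type=insecure,network=172.16.0.0/24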
No shared storage; each server has two 4TB SSDs in a hardware RAID mirror.
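The VM disks sit on an LVM-thin storage (thin01_4tb in the log below), so the storage.cfg entry should look roughly like this on both nodes (the volume group and pool names here are placeholders, not my actual values):

Code:
lvmthin: thin01_4tb
        thinpool thin01
        vgname pve_4tb
        content images,rootdir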
Running latest version (pve-manager/8.1.4/ec5affc9e41f1d79 (running kernel: 6.5.11-7-pve)).
When I right-click a standard VM and hit Migrate, the process starts right away, but then hangs for a good five minutes before it actually starts transferring data.
Is this normal? What is it doing during those minutes?
Code:
2024-02-02 08:25:08 use dedicated network address for sending migration traffic (172.16.0.1)
2024-02-02 08:25:09 starting migration of VM 102 to node 'pv01' (172.16.0.1)
2024-02-02 08:25:09 found local disk 'thin01_4tb:vm-102-disk-2' (attached)
2024-02-02 08:25:09 found local disk 'thin01_4tb:vm-102-disk-3' (attached)
2024-02-02 08:25:09 starting VM 102 on remote node 'pv01'
2024-02-02 08:25:12 volume 'thin01_4tb:vm-102-disk-2' is 'thin01_4tb:vm-102-disk-0' on the target
2024-02-02 08:25:12 volume 'thin01_4tb:vm-102-disk-3' is 'thin01_4tb:vm-102-disk-1' on the target
2024-02-02 08:25:12 start remote tunnel
2024-02-02 08:25:13 ssh tunnel ver 1
2024-02-02 08:25:13 starting storage migration
2024-02-02 08:25:13 scsi0: start migration to nbd:172.16.0.1:60001:exportname=drive-scsi0
drive mirror is starting for drive-scsi0
<<5-7min wait>>
drive-scsi0: transferred 571.0 MiB of 105.0 GiB (0.53%) in 5m 23s
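If it helps, I believe the GUI action is equivalent to this CLI call (VM ID and target node taken from the log above), and I'd expect the same pause there:

Code:
qm migrate 102 pv01 --online --with-local-disks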