Issue with remote-migrate when VMs have Discard checked on their disks

We found a post describing an issue similar to ours when trying to move a VM from one Ceph cluster to another. It suggested disabling Discard, which we did, and that seems to fix the issue.
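For reference, the Discard flag can be toggled from the CLI as well as from the GUI. A sketch, assuming VM ID 100 and placeholder storage/volume names (copy the other drive options from your own config):

```shell
# Show the current disk line, e.g.
# scsi0: ceph-rbd:vm-100-disk-0,discard=on,size=1T
qm config 100 | grep '^scsi0'

# Re-set the disk without the discard flag; options not listed here
# are reset, so carry over the rest from the output above
qm set 100 --scsi0 ceph-rbd:vm-100-disk-0,size=1T
```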

The source cluster has the VM on Ceph RBD, and the destination is CephFS, as recommended by the Proxmox team since RBD is not yet supported as a target.
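For context, a cross-cluster migration like this is typically started with qm remote-migrate. A sketch with placeholder VM IDs, hostname, API token, fingerprint, and storage names (adjust all of these to your environment):

```shell
# Live-migrate VM 100 to the remote cluster, keeping the same VM ID there
qm remote-migrate 100 100 \
  'host=target.example.com,apitoken=PVEAPIToken=root@pam!migrate=xxxx,fingerprint=AA:BB:...' \
  --target-bridge vmbr0 \
  --target-storage cephfs \
  --online
```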

Here is the error we hit after a while. It also causes the migration to take several minutes before the process even starts, and it eventually fails.



drive-scsi0: transferred 97.0 GiB of 1.0 TiB (9.47%) in 16m 57s
drive-scsi0: transferred 97.2 GiB of 1.0 TiB (9.49%) in 16m 58s
drive-scsi0: transferred 97.4 GiB of 1.0 TiB (9.51%) in 16m 59s
drive-scsi0: Cancelling block job
CMD websocket tunnel died: command 'proxmox-websocket-tunnel' failed: interrupted by signal

drive-scsi0: Done.
2023-05-31 01:19:21 ERROR: online migrate failure - block job (mirror) error: interrupted by signal
2023-05-31 01:19:21 aborting phase 2 - cleanup resources
2023-05-31 01:19:21 migrate_cancel
2023-05-31 01:19:21 ERROR: writing to tunnel failed: broken pipe
2023-05-31 01:19:21 ERROR: migration finished with problems (duration 00:17:01)

Related post: https://forum.proxmox.com/threads/cannot-live-migrate-with-discard-set.60119/

Is there a fix for this that avoids disabling Discard?
 
Hi,

Can you post your VM and storage config (from both sides), the remote-migrate command you used, and the full log plus the journal from both sides covering that time?
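The information asked for above can be collected with a few commands. A sketch, assuming VM ID 100 and the timestamps from the log in this thread (adjust both on each side):

```shell
# VM config (run on the source; after a failed attempt, also on the target
# for the VM ID that was created there)
qm config 100

# Storage configuration, both sides
cat /etc/pve/storage.cfg

# Journal covering the migration window, both sides
journalctl --since "2023-05-31 01:00" --until "2023-05-31 01:25" > journal.txt
```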