Issue with remote-migrate when VMs have Discard checked on their disks

Jan 16, 2022
We found a post describing an issue similar to ours when trying to move a VM from one Ceph cluster to another. They suggested disabling Discard, which we did, and it seems to have fixed the issue.
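For reference, Discard can be toggled in the GUI (VM > Hardware > Hard Disk > Discard) or on the CLI with qm set. A minimal sketch, assuming VM ID 100 with its disk scsi0 on a storage named rbd-pool (VM ID, storage and volume name are all hypothetical):

# Re-specify the existing volume with discard=off (hypothetical names; adjust to your setup)
qm set 100 --scsi0 rbd-pool:vm-100-disk-0,discard=off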

The source cluster has the VM on Ceph RBD, and the destination is CephFS, as recommended by the Proxmox team, since RBD is not yet supported as a target.
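For context, a cross-cluster migration like this is normally started with the (still experimental) qm remote-migrate command. A minimal sketch, assuming VM ID 100, a hypothetical API token, target host, bridge vmbr0 and a CephFS storage named cephfs-store; the secret and fingerprint are placeholders:

# Hypothetical IDs, token, host and storage names; run on the source node
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=<SECRET>,host=203.0.113.10,fingerprint=<TARGET-FINGERPRINT>' \
  --target-bridge vmbr0 --target-storage cephfs-store --online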

Here is the issue we hit after a while. It also causes the migration to take several minutes before the process starts, and it eventually fails:

drive-scsi0: transferred 97.0 GiB of 1.0 TiB (9.47%) in 16m 57s
drive-scsi0: transferred 97.2 GiB of 1.0 TiB (9.49%) in 16m 58s
drive-scsi0: transferred 97.4 GiB of 1.0 TiB (9.51%) in 16m 59s
drive-scsi0: Cancelling block job
CMD websocket tunnel died: command 'proxmox-websocket-tunnel' failed: interrupted by signal

drive-scsi0: Done.
2023-05-31 01:19:21 ERROR: online migrate failure - block job (mirror) error: interrupted by signal
2023-05-31 01:19:21 aborting phase 2 - cleanup resources
2023-05-31 01:19:21 migrate_cancel
2023-05-31 01:19:21 ERROR: writing to tunnel failed: broken pipe
2023-05-31 01:19:21 ERROR: migration finished with problems (duration 00:17:01)

Related post:

Is there a fix for this that would avoid disabling Discard?

Can you post your VM and storage configs (from both sides), the remote-migrate command you used, and the full log plus the journal from both sides covering that time?
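For anyone collecting that information, something along these lines should cover it (the VM ID and the time window are placeholders; run on both the source and the target node):

qm config 100                                    # VM configuration
cat /etc/pve/storage.cfg                         # storage configuration
journalctl --since '2023-05-31 01:00' --until '2023-05-31 01:20'   # journal for the migration window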

