Slow disk migration from iSCSI (FlashME5) to CephStorage

Mar 19, 2025
Hi,
I’m seeing very slow disk migration in my Proxmox cluster when moving a VM disk from FlashME5 (iSCSI multipath + LVM) to CephStorage (RBD).
Migration from Ceph → FlashME5 is fast (~500–700 MB/s), but the reverse direction (FlashME5 → Ceph) only reaches ~200 MB/s or less.

Environment:

  • Proxmox 8.x
  • FlashME5 connected via iSCSI with multipath (multipath -ll shows 4 paths, 2 active).
  • Ceph configured properly and fast otherwise.
  • VM is powered off during migration (qm move_disk).
  • Disks: aio=threads, cache=writeback, tried discard=ignore.
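Since the slow direction has to read from FlashME5, the first thing I’ve been checking is the multipath read path itself — with an ALUA array only the active path group carries I/O, so 2-of-4 active may be expected, but the path selector and a single-stream direct read are worth verifying (device name below is a placeholder for my setup):

```shell
# Placeholder multipath map name -- substitute the actual device.
DEV=/dev/mapper/flashme5-lun0

# Inspect path grouping, priorities, and the selector policy in use.
multipath -ll

# Show the effective path_selector from the running multipathd config.
multipathd show config | grep -A2 path_selector

# Sequential direct read, bypassing the page cache, to see what a
# single reader (like the migration process) actually gets.
dd if="$DEV" of=/dev/null bs=4M count=2048 iflag=direct status=progress
```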
fio test on FlashME5:

Code:
WRITE: bw=7002MiB/s (7342MB/s), IOPS=1.7M+

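That number is from a write test, though, and the slow direction reads from FlashME5 — a matching read-side fio run would isolate that half (job parameters below are my guess at the original job, not confirmed, and the device path is a placeholder):

```shell
# Sequential direct reads against the multipath device.
# Device path, block size and queue depth are assumptions -- adjust
# them to mirror the write test for a fair comparison.
fio --name=seqread --filename=/dev/mapper/flashme5-lun0 \
    --rw=read --bs=1M --iodepth=32 --numjobs=1 \
    --ioengine=libaio --direct=1 \
    --runtime=30 --time_based --group_reporting
```

A single job at moderate depth is closer to what the migration process looks like than a massively parallel benchmark, so a low number here would point at the read path rather than at Ceph.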
During migration:


Code:
transferred 258.6 MiB of 25.0 GiB (1.01%)
transferred 517.1 MiB of 25.0 GiB (2.02%)
...

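For what it’s worth, the rate can be read straight off those progress lines — the delta between consecutive lines, with qm printing roughly one line per second (my assumption about the cadence), approximates MiB/s:

```shell
# Sample progress lines from the migration output above.
printf '%s\n' \
  'transferred 258.6 MiB of 25.0 GiB (1.01%)' \
  'transferred 517.1 MiB of 25.0 GiB (2.02%)' |
awk '/^transferred/ {
  # $2 is the cumulative MiB figure; print the delta between lines.
  if (prev != "") printf "%.1f MiB per interval\n", $2 - prev
  prev = $2
}'
```

Which lines up with the ~200–260 MB/s I’m seeing overall.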
Tried so far:
  • rbd_cache = true
  • discard=ignore
  • Different cache modes
  • qm migrate with --with-local-disks --online 0
Still very slow. Any ideas what could be the bottleneck?
Would using dd + rbd import help in this case, or is this a limitation in the RBD/QEMU layer?
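For reference, if I end up doing it by hand, the route I’d sketch is qemu-img convert straight from the LV into RBD rather than dd + rbd import (pool name, VM ID and LV path below are placeholders for my setup):

```shell
# Read directly from the LVM-backed volume and write into the Ceph pool.
# -p shows progress; -f/-O raw match how Proxmox stores both sides;
# -W allows out-of-order writes, which can help against RBD.
qemu-img convert -p -W -f raw -O raw \
    /dev/vg_flashme5/vm-100-disk-0 \
    rbd:CephStorage/vm-100-disk-0

# Afterwards, let Proxmox pick up the new volume and verify it before
# deleting the source.
qm rescan --vmid 100
```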
Thanks in advance!