Upgrading to new SANs, but can't live storage migrate iSCSI -> iSCSI

rkl

Renowned Member
Sep 21, 2014
We've got some SANs providing iSCSI LVM storage to Proxmox VMs and we're in the process of replacing them with new SANs (iSCSI LVM again). Is there any way to live migrate the storage for running VMs from the old SANs' iSCSI to the new SANs' iSCSI? I can't believe this is an uncommon feature request, but it appears to be unsupported by Proxmox 8.3 that we're running.

To test this, I made both the old and new SANs' iSCSI volumes (both identically sized LVMs) available in the storage list of a Proxmox 8.3 server and then attempted VM -> Hardware -> Hard Disk (virtio0) -> Disk Action -> Move Storage -> Target Storage -> new SAN iSCSI volume (plus Disk Image: CH 00 ID 0 LUN 0), clicked on "Move disk" and then got this error:
TASK ERROR: storage migration failed: can't allocate space in iscsi storage

Interestingly, if you select Target Storage -> local (and pick the qcow2 format) instead, the live storage migration from iSCSI to qcow2 *does* work (you obviously need enough local storage to hold the qcow2 file). However, trying to then go from qcow2 to the new SAN iSCSI volume fails with the same TASK ERROR. Is there a technical reason that live storage migration to any LVM iSCSI destination doesn't work? (I'm hoping it just hasn't been implemented rather than being technically impossible.) As it stands, there's no way for us to move a VM between two iSCSI-based SANs without shutting it down and copying it offline, which seems unsatisfactory, especially for a large VM.
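For the record, the two-hop dance above can also be driven from the CLI with qm. This is only a hedged sketch of what I tried: the VM ID 100, disk slot virtio0, and storage names local and new-san-lvm are placeholders, not our real values, and the second hop still needs a destination that supports allocation:

```shell
# Hop 1: live-move the disk off the old SAN onto local storage as qcow2
# (VM ID, disk slot, and storage names are placeholders for illustration)
qm disk move 100 virtio0 local --format qcow2 --delete 1

# Hop 2: live-move from local qcow2 to the new SAN -- this fails against a
# direct iSCSI target, but should work against a PVE-managed LVM pool
qm disk move 100 virtio0 new-san-lvm --delete 1
```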
 
To test this, I made both the old and new SANs' iSCSI volumes (both identically sized LVMs) available in the storage list of a Proxmox 8.3 server and then attempted VM -> Hardware -> Hard Disk (virtio0) -> Disk Action -> Move Storage -> Target Storage -> new SAN iSCSI volume (plus Disk Image: CH 00 ID 0 LUN 0),
The fact that you are referencing CH/ID/LUN indicates that you are not using LVM but direct iSCSI.
When using LVM/iSCSI the PVE storage pool does not interact with iSCSI directly. The operations are done on LVM Volume Group.
You should review your new setup and adjust as needed.

Good luck

P.S. you can use this KB article to assist: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
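For reference, a PVE-managed LVM-over-iSCSI setup along the lines of that article looks roughly like this in /etc/pve/storage.cfg. The portal, target, VG name, and base volume below are placeholders, not your actual values:

```
# iSCSI layer: exposes the LUN only; PVE does not allocate on it directly
iscsi: new-san
    portal 10.0.0.10
    target iqn.2005-10.org.example:new-san
    content none

# LVM layer on top: this is the pool you point "Move Storage" at
lvm: new-san-lvm
    vgname vg_new_san
    base new-san:0.0.0.scsi-<lun-wwid>
    shared 1
    content images
```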


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
The KB article does look a bit tricky to me - I'm still confused as to why I can dd or drbd sync between two direct iSCSI volumes mounted on the same Proxmox server but from different SANs, but apparently I can't use Proxmox's live storage migration with the same setup. It feels like it should be technically possible because I'm not using the destination direct iSCSI volume for anything else (so I'm not sure why multipath has to be involved).
 
Hi @rkl , your messaging is somewhat contradictory.

  • You said that you are using iSCSI with an LVM overlay. In this forum that implies a PVE-managed LVM overlay. Perhaps you are using a different scheme? The output of the following commands may help: lsscsi, lsblk, blkid, cat /etc/pve/storage.cfg, pvesm status
  • You posted that you are using: CH 00 ID 0 LUN 0. This shows that you are not using PVE LVM overlay.
  • You state that you are using "two direct iSCSI volumes". This further confirms that you are operating at the raw iSCSI level rather than PVE-managed LVM.
Direct iSCSI and PVE LVM are two storage schemes with different features and capabilities.
  • A PVE disk move operation requires provisioning of the target disk.
  • Creating, deleting, expanding (and almost any other operation) are not supported on direct iSCSI.
  • PVE LVM storage scheme, on the other hand, supports almost all operations.
You are able to "dd" because you bypass all the PVE scaffolding and access disks directly for data copy only.
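To illustrate that point, the raw copy you describe amounts to something like the following. The by-path device names are invented examples, not real paths (verify yours with lsblk/lsscsi), and the copy should only be done with the VM powered off to avoid a torn image:

```shell
# Bypasses all PVE scaffolding: block-for-block copy between two logged-in
# iSCSI LUNs. Device paths below are placeholders for illustration only.
dd if=/dev/disk/by-path/ip-10.0.0.1:3260-iscsi-iqn.example.old-lun-0 \
   of=/dev/disk/by-path/ip-10.0.0.2:3260-iscsi-iqn.new-lun-0 \
   bs=4M conv=fsync status=progress
```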

The KB article does look a bit tricky to me
The layering can appear to be complex. We tried to simplify it with the diagram at the beginning of the article.
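In short, the one-time setup on the new LUN is roughly as follows. The device path, VG name, and storage ID are placeholders; substitute your own:

```shell
# Initialize the new SAN LUN as an LVM physical volume and volume group
# (device path and VG name are placeholders)
pvcreate /dev/disk/by-id/scsi-<new-lun-wwid>
vgcreate vg_new_san /dev/disk/by-id/scsi-<new-lun-wwid>

# Register the VG as a shared PVE storage pool; disk moves can target this
pvesm add lvm new-san-lvm --vgname vg_new_san --shared 1 --content images
```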

You don't need to use multipath unless you actually have multiple paths to the disk.

Cheers.

