iSCSI direct LUNs with multipath

tvtue

New Member
Nov 22, 2024
Hello dear Proxmox users,

We are thinking about migrating from oVirt to Proxmox and have quite a lot of direct LUNs there. I saw that it is possible to use direct LUNs in Proxmox, but I haven't found a way to configure multipath with that. Please don't get me wrong: I found the wiki documentation about iSCSI multipathing and using LVM on top. It is great and it works. But we need direct LUNs with multipath and without LVM on top, because we would like to keep the data on the direct LUNs. That way we would not need to reinstall the VMs nor migrate any data.
When I try to add an iSCSI storage in the web GUI, I can only add one portal IP address. But the LUNs are accessible via two or even four (on a different SAN) IP addresses. Of course I can configure multipath manually by going through the iscsiadm discovery and login steps and adding the LUNs to the multipath daemon, roughly as shown below. But I haven't found a way to add them as a shared storage device.
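For reference, the manual configuration I mean looks roughly like this (the portal IPs and the target IQN are placeholders for our setup):

# discover the target through both portals
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
iscsiadm -m discovery -t sendtargets -p 10.0.0.2
# log in to each discovered path
iscsiadm -m node -T iqn.2005-10.org.example:target1 -p 10.0.0.1 --login
iscsiadm -m node -T iqn.2005-10.org.example:target1 -p 10.0.0.2 --login
# multipathd then groups the resulting block devices by WWID
multipath -ll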
We could use only one path to every LUN, but that would deprive us of redundancy and performance.
Is there any other way to accomplish this?

Thanks in advance and regards
Timo
 
We are thinking about migrating from oVirt to Proxmox and have quite a lot of direct LUNs there. I saw that it is possible to use direct LUNs in Proxmox, but I haven't found a way to configure multipath with that.
Direct LUN access is implemented via QEMU's native ability to connect to iSCSI. QEMU lacks support for multipath.

Your best bet is to:
a) use iscsiadm to connect the paths directly, bypassing PVE's scaffolding
b) implement multipath (a minimal configuration sketch follows below)
c) use the "qm set" command to point the VM to the resulting multipath device

or: connect to iSCSI directly from the VM. If you are, indeed, bypassing PVE's scaffolding, then there is no benefit to having the LUN connected to the hypervisor and then passing it through.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Thank you for your reply.

Direct LUN access is implemented via QEMU's native ability to connect to iSCSI. QEMU lacks support for multipath.

What do you mean by "it lacks support for multipath"? Does it support LVM LVs or iSCSI LUNs in contrast to that? I thought it is just a block device that is handed to the QEMU process, isn't it?

Your best bet is to:
a) use iscsiadm to connect the paths directly, bypassing PVE's scaffolding
b) implement multipath

Do you mean implementing it in QEMU, or where?

c) use "qm set" command to point VM to resulting md device

You mean the /dev/mapper/<wwid> device that results from configuring multipath?

or: connect to iSCSI directly from the VM.

That is not possible, because the OS disk is a direct LUN in oVirt, too. Also, it is some kind of security measure to not pass the storage VLAN into the VMs.
 
What do you mean by "it lacks support for multipath"? Does it support LVM LVs or iSCSI LUNs in contrast to that? I thought it is just a block device that is handed to the QEMU process, isn't it?
Direct LUNs are implemented via this mechanism: https://www.qemu.org/docs/master/system/qemu-block-drivers.html#iscsi-luns
There is no support for multipath here.
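For illustration, the syntax documented there looks roughly like this (the portal IP and target IQN are placeholders):

# QEMU's built-in iSCSI initiator attaching LUN 0 of a target
qemu-system-x86_64 -drive file=iscsi://10.0.0.1/iqn.2005-10.org.example:target1/0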

The LVM overlay for iSCSI is implemented via a different mechanism, where it is possible to insert a multipath layer.
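In that setup the storage definition only references the volume group, so the VG can just as well sit on a multipath device. A sketch of the corresponding /etc/pve/storage.cfg entry (names are made up):

# LVM VG that was created on top of the multipath device
lvm: san-lvm
        vgname vg_san
        shared 1
        content images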

Do you mean implementing it in QEMU, or where?
Map the LUN to the hypervisor, add multipath, then use "qm set --scsi0 /dev/mapper/mpath_device" (the syntax is approximate; please find the correct syntax in "man qm").

That is not possible, because the OS disk is a direct LUN in oVirt, too
Sometimes you need to make adjustments when migrating between complex ecosystems. I don't know what "directlun" means in the oVirt context and whether there is a direct equivalent in PVE.


 
Direct LUNs are implemented via this mechanism: https://www.qemu.org/docs/master/system/qemu-block-drivers.html#iscsi-luns
There is no support for multipath here.

That looks like the "iscsidirect" variant of connecting iSCSI LUNs to PVE. I saw similar iscsi:// URLs in the man page of iscsi-ls, which seems to be necessary for listing the LUNs. If there is no multipathing, that is not the way we can go.
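For anyone else reading along, listing the LUNs with iscsi-ls looks roughly like this (the portal IP is a placeholder):

# -s also shows the LUNs behind each discovered target
iscsi-ls -s iscsi://10.0.0.1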

Map the LUN to the hypervisor, add multipath, then use "qm set --scsi0 /dev/mapper/mpath_device" (the syntax is approximate; please find the correct syntax in "man qm").

Yeah, that sounds promising; I think I will test that. I wonder if VM live migration works then. If I configure the devices on all PVE nodes and make sure their names are the same, it should work, shouldn't it?

Sometimes you need to make adjustments when migrating between complex ecosystems. I don't know what "directlun" means in the oVirt context and whether there is a direct equivalent in PVE.

I think "directluns" from ovirt are quite similar to what PVE has with "use lun directly". Nomen est omen, you get the lun as a normal virtual disk in the vm but as big as accordingly. Also snapshots are not possible with ovirt just like in PVE.
oVirt does it a little nicer: you can do a discovery against multiple portal IPs, and if it is the same LUN ID, it recognizes two or more paths to it. Would be cool to have that in PVE too: just run a pvesm scan multiple times and the system recognizes the multiple paths.
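Something like this, with placeholder portal IPs; today each scan stands on its own instead of being merged by LUN ID:

pvesm scan iscsi 10.0.0.1
pvesm scan iscsi 10.0.0.2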
 
As said above, I did a little test and configured a multipath device on all PVE nodes using the iscsiadm discovery, login, and multipath commands, just like it is documented in https://pve.proxmox.com/wiki/Multipath, but I stopped at the step where one would continue with LVM on top of the multipath device.
I then went ahead and added the device to a VM directly using qm set <vmid> --scsi1 /dev/mapper/<wwid>,shared=1. With that I was able to live migrate the VM to another PVE node.
The disadvantage that I see is that this is not reflected in the web UI under Datacenter -> Storage, but only when you look into the VM hardware details.

I wonder why this is not implemented in pvesm, for example like pvesm add md ... or similar, so that it appears under the Datacenter storage page too. What am I missing here?

Of course, you have to configure a lot manually if you have dozens of such multipath devices, but other than that, are there any other downsides or pitfalls that I don't see?
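To keep the manual part manageable, I am thinking about scripting it, roughly like this (the VMID-to-WWID mapping is made up):

#!/bin/sh
# hypothetical mapping of VMIDs to multipath WWIDs; adjust before use
for pair in "101:3600a098038303634" "102:3600a098038303635"; do
    vmid=${pair%%:*}
    wwid=${pair#*:}
    qm set "$vmid" --scsi1 "/dev/mapper/$wwid,shared=1"
done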
 
I would say that it is like that because nobody uses them that way, so the GUI doesn't really reflect that possibility. Maybe that could be a nice patch to provide? ;)
I don't think it's a problem, and you probably don't want all those LUNs to be displayed in the GUI anyway... so it may need more than a patch, but rather a discussion about a way to display those nicely...
 
