ZFS, iSCSI, and multipathd

Josh Williams

Active Member
May 5, 2018
I just brought a new disk shelf into service, and before I fill it up and add another chunk of storage to my 3-node cluster, I wanted to perform one last sanity check.

I know that ZFS over iSCSI works in Proxmox (and works really well). But can someone confirm that there is currently no multipath support?
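For context, a ZFS over iSCSI entry in /etc/pve/storage.cfg looks something like this (the storage name, pool, target and address below are only placeholders), and as far as I can tell it only accepts a single portal:

Code:
zfs: shelf01
        iscsiprovider LIO
        portal 192.168.50.10
        target iqn.2018-05.org.example.storage:shelf1
        pool tank
        blocksize 4k
        content images
        lio_tpg tpg1
        sparse 1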
 
AFAIK no, because the native iSCSI support in QEMU lacks multipath (a limitation of QEMU, not Proxmox).
Just wondering why QEMU is the show stopper here?

Maybe it's my lack of understanding, but shouldn't multipathing be handled by the host OS, in this case Debian?

I would be interested in understanding this issue in more detail.

And since it's not currently supported, what's the best way around this issue?

LACP on the switch?
Port binding in Proxmox?

Cheers
G
 
With ZFS over iSCSI, QEMU is managing the iSCSI connection, without the host OS being involved at all. Multipath support would require either libiscsi or QEMU itself to handle MPIO (just as multipathd does for the kernel). It looks like libiscsi will not add it, as it would make it less portable. Another option is to add two independent disks to the guest and let the guest OS manage multipath.
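To make that last option concrete, inside a Linux guest that sees the same LUN as two separate disks, the setup would look roughly like this (a sketch only, assuming a Debian-based guest and that both disks really report the same WWID so multipathd can merge them):

Code:
# inside the guest, not on the Proxmox host
apt install multipath-tools
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    find_multipaths yes
}
EOF
systemctl restart multipathd
multipath -ll   # should show one map with two paths
# then use /dev/mapper/mpatha instead of the individual /dev/sdX devices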
 

Wondering how VMware does it?

How have they dealt with the problem?
 
Don't know, but it's not very relevant as they are using a completely different stack.
Sure, I understand your comment, but it is relevant in terms of how they dealt with the issue, as there is obviously a solution out there that works for VMware and Hyper-V (Windows). I am aware these both have a different stack, but understanding how something works, even if it's different overall, may reveal some similarities; since the protocol is the same, they will face similar issues.
 
Well, there are two ways to add MPIO support. The first would be to switch to the host iSCSI stack instead of using the one from QEMU, but this would have a few drawbacks. The second is to add MPIO support to the QEMU stack, which requires either libiscsi or QEMU to implement it. I'm not aware of any plan to do it on either side, but maybe I'm wrong.
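To give an idea, the first option is roughly what you would do by hand today with the host stack for a plain iSCSI LUN: log in over both portals with open-iscsi and let multipathd merge the paths (sketch only, the addresses and IQN are placeholders):

Code:
# on the Proxmox host
iscsiadm -m discovery -t sendtargets -p 192.168.50.10
iscsiadm -m discovery -t sendtargets -p 192.168.60.10
iscsiadm -m node -T iqn.2018-05.org.example.storage:shelf1 -p 192.168.50.10 --login
iscsiadm -m node -T iqn.2018-05.org.example.storage:shelf1 -p 192.168.60.10 --login
multipath -ll   # one /dev/mapper device with two active paths

The ZFS over iSCSI plugin bypasses all of this, since QEMU/libiscsi opens the LUN directly.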
 

danielb, what do you see as the issues with host-based MPIO that we should look out for?

Cheers
Gerardo
 
