Proxmox cluster shared iSCSI storage problem with directly connected hosts

ilke

New Member
Jun 19, 2023
Hi,

I have two Proxmox 8.0 hosts in a cluster, each connected directly to the iSCSI storage (I don't have a SAN switch yet).
I have multipath installed and configured, and I can successfully see the storage from both hosts if I create separate iSCSI targets (Cluster -> Storage -> Add -> iSCSI) for both hosts, with separate IP addresses.
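(For reference, the two per-host entries roughly correspond to this on the CLI; the storage IDs, portal IPs and IQN below are just placeholders:)

# one iSCSI storage entry per storage-side portal IP
pvesm add iscsi san-portal-a --portal 10.10.10.1 --target iqn.2000-01.com.example:storage.lun1
pvesm add iscsi san-portal-b --portal 10.10.20.1 --target iqn.2000-01.com.example:storage.lun1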

The idea is to have the same LVM shared between both hosts.
Can this be done if I have separate cables and separate iSCSI IP addresses for each host on the storage side?

Cheers!
 
OK, thank you for the fast reply. I am not sure what to do next - how do I use the mpath device? Each host has its own path (I am aware this is not multipath), but when I create a new LVM (Cluster -> Storage -> Add -> LVM), do I need to create two again, based on the separate iSCSI storages I already created?
Do I need to bond the used interfaces on the storage side?
 
@bbgeek17, maybe you know the answer (or at least a better one) - in such a configuration, would it be possible to map a target directly to a VM AND have it support live migration? Specifically if using SAS-based storage (iSCSI can probably be handled in the guest directly).
 
but when I create a new LVM (Cluster -> Storage -> Add -> LVM), do I need to create two again?
No. You'd follow all the same steps as if you didn't have multipathing, just use the multipath device for your PV (e.g., vgcreate my_vg /dev/mapper/mpath0, or the wwn-xxx device if you didn't specify friendly names), and then the VG will show up normally.
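A minimal sketch of that, assuming friendly names are enabled and the device is called mpath0 (adjust the names to whatever multipath -ll shows on your nodes):

# on ONE node only: put the PV/VG on the multipath device, never on the raw /dev/sdX paths
pvcreate /dev/mapper/mpath0
vgcreate my_vg /dev/mapper/mpath0
# then add the VG as shared LVM storage (GUI: Datacenter -> Storage -> Add -> LVM, or CLI):
pvesm add lvm my_lvm_storage --vgname my_vg --shared 1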
 
No. You'd follow all the same steps as if you didn't have multipathing, just use the multipath device for your PV (e.g., vgcreate my_vg /dev/mapper/mpath0, or the wwn-xxx device if you didn't specify friendly names), and then the VG will show up normally.
OK, thank you, I think I understand enough to research on my own, and I will be back with a report or more questions :)
 
don't have a SAN switch yet
iSCSI is Ethernet, so an ordinary Ethernet switch will suffice.

would it be possible to map a target directly to a VM AND have it support live migration?
AFAIK, only with ZFS-over-iSCSI, which needs ZFS on the storage side and a supported iSCSI target implementation (e.g., targetcli/LIO). Each connection will be a specific LUN and it can be live-migrated (and snapshotted, etc.), yet IIRC there is no multipath at the moment.
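As a rough, hedged sketch of what such a ZFS-over-iSCSI storage definition could look like on the CLI (portal, target IQN, pool name and provider below are placeholders, and the storage box must run one of the supported target providers):

pvesm add zfs zfs-over-iscsi \
    --portal 192.168.10.50 \
    --target iqn.2003-01.org.example.storagebox:sn.abcdef123456 \
    --pool tank/pve \
    --iscsiprovider LIO \
    --blocksize 4k \
    --sparse 1
# with the LIO provider you additionally point PVE at the target portal group via --lio_tpg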
 
OK, thank you for the fast reply. I am not sure what to do next - how do I use the mpath device? Each host has its own path (I am aware this is not multipath),
@alexskysilk is correct, you should use the mpathX device as the target of your LVM creation. You said that you installed and configured multipath, so I assumed you had seen the resulting devices. You can check with "multipath -ll".
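A quick way to verify that on each node (the device names are just examples; yours depend on your WWIDs/aliases):

multipath -ll          # should show one mpath device with all paths active
ls -l /dev/mapper/     # the mpathX (or WWID-named) device is what you feed to LVM
lsblk                  # the underlying sdX disks should hang off the mpath device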

@bbgeek17, maybe you know the answer (or at least a better one) - in such a configuration, would it be possible to map a target directly to a VM
You can always map iSCSI directly to the VM; you'd lose any hypervisor visibility but _may_ gain some simplicity.
AND have it support live migration?
PVE is responsible for live migration, and unless you PXE-boot from iSCSI, there is still some sort of boot volume the VM needs before it establishes its internal network/iSCSI connections. I don't know whether PXE/BIOS boot can handle the VM state transfer done by KVM across nodes.

Specifically if using SAS-based storage (iSCSI can probably be handled in the guest directly).
In iSCSI you have a target (server) and an initiator (client). Whether it's SAS, NVMe or a SCSI JBOD behind it - there needs to be an iSCSI target, and the VM is the initiator. There are two pieces to HA: the initiator/client side is handled by PVE/KVM; the target also needs HA, and that's more complex.

AFAIK, only with ZFS-over-iSCSI, which needs ZFS on the storage side and a supported iSCSI target implementation (e.g., targetcli/LIO).
I would not have considered having KVM establish the iSCSI connection and pass it through to the VM as a virtio disk to be "mapping a target directly to the VM". But I do see how it can be interpreted that way, and in that context you are correct. I read the question as: the VM establishes the iSCSI connection within the context of its BIOS/OS.


 
dammit I specifically asked for a better answer :lol:

I didn't intend to hijack the thread, but since the initial question is resolved... I have SAS-based storage in the lab (4 connections per controller, 2 controllers) and I'm trying to come up with the best way to use it. I have 8 LUNs created, all mapped to 4 nodes with multipathing.

The bulk of the storage is going to be used for a filestore, so my idea was to just pass the mpath devices to separate VMs to act as filer heads, when it occurred to me that the VMs would not be able to migrate in this configuration. The alternative is simply to create large qcow2 files on each LUN, but I'd rather try to limit the write amplification this will cause. The floor is open for smarter people than me :)

Edit, more info: guests (filer heads) will be Windows-based with NVR capture clients, making it advantageous to present raw LUNs for guest-native file system management.
 
The bulk of the storage is going to be used for a filestore, so my idea was to just pass the mpath devices to separate VMs to act as filer heads, when it occurred to me that the VMs would not be able to migrate in this configuration. The alternative is simply to create large qcow2 files on each LUN, but I'd rather try to limit the write amplification this will cause. The floor is open for smarter people than me
I don't think you have much choice beyond thick LVM, with perhaps an entire LUN sliced up and attached to a single VM, or using a clustered filesystem where you are forced to use qcow2.
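A rough sketch of the first option (storage name, VM ID and size are placeholders): once the LUN-backed VG is added as shared thick LVM storage, you can carve most of it into one big disk for the filer VM:

# allocate a 3900 GiB thick LV from the shared LVM storage and attach it to VM 101
qm set 101 --scsi1 my_lvm_storage:3900
# inside the Windows guest it appears as a raw disk to format with NTFS/ReFS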


 
Finally, it works.
Thank you all for the good discussion.
Your replies helped a lot.

1. My first mistake was not adding the WWID with multipath -a <wwid> on the second host in the cluster.
I have a blacklist-everything entry in my multipath.conf file (see the sketch after step 3).

2. The second thing I learned from this thread is that I need to create the volume group on one of the hosts:
vgcreate vg_name /dev/mapper/multipath_name
This created a volume group that, after a short while, showed up on both hosts (probably because it is based on the multipath device?).

3. After this, I needed to add the LVM storage at the cluster level (Cluster -> Storage -> Add -> LVM), based on the existing volume group (vg_name), with "Shared" ticked.
This LVM storage was available to both hosts in the cluster, and I can successfully use it for VM disks, which are capable of live migration, which I tested.
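Putting the three steps together, a minimal sketch of the whole recipe (the WWID, device alias, VG and storage names are placeholders; take the real values from multipath -ll):

# 1) multipath: with a global blacklist, whitelist the LUN's WWID on BOTH nodes
#    /etc/multipath.conf fragment:
#      blacklist            { wwid ".*" }
#      blacklist_exceptions { wwid "3600a0b80001234560000abcd0000ef01" }
multipath -a 3600a0b80001234560000abcd0000ef01   # also records the WWID in /etc/multipath/wwids
systemctl restart multipathd
multipath -ll                                    # the mpath device should now appear on both nodes

# 2) LVM: create the volume group on the multipath device, on ONE node only
vgcreate vg_name /dev/mapper/multipath_name

# 3) PVE: add the VG as shared LVM storage at the datacenter level
#    (GUI: Datacenter -> Storage -> Add -> LVM, or on the CLI:)
pvesm add lvm shared-lvm --vgname vg_name --shared 1 --content images,rootdir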

I had a lot of problems educating myself on this subject, so I will try to make the additional effort and post a text-based guide here.
 
I wonder if the same scenario could work with Fibre Channel block storage?
Yes, of course. Whether you use the FC or iSCSI protocol to deliver block devices to the host, the end result is the same: they are fed into multipath, which is then fed into LVM. Make sure you use thick LVM, which is the only variant supported with shared storage, and the only way to avoid data corruption. The downside is the absence of snapshot support, thin provisioning, clones, etc.
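As a hedged illustration that only the delivery of the block device differs (the mpath alias and VG name are placeholders), the FC chain on each node looks roughly like:

rescan-scsi-bus.sh                   # from sg3-utils: make the FC LUNs visible (or rescan via /sys/class/scsi_host)
multipath -ll                        # the FC LUN shows up as an mpath device, exactly as with iSCSI
vgcreate vg_fc /dev/mapper/mpathb    # on one node only, then add the VG as shared (thick) LVM in PVE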


 
Thank you. I am aware of those disadvantages; I am thinking about CephFS as a backup, but that could come in the future.
 
Thank you. I am aware of those disadvantages; I am thinking about CephFS as a backup, but that could come in the future.
Ceph is a completely different animal, with a different architecture, purpose, protocols and minimum requirements. The primary similarity is that the VM sees the end result as a block device, regardless of whether it's Ceph or iSCSI/FC/LVM.


 
Ceph is a completely different animal, with a different architecture, purpose, protocols and minimum requirements. The primary similarity is that the VM sees the end result as a block device, regardless of whether it's Ceph or iSCSI/FC/LVM.
Yes, but since it supports snapshots, it could be a second storage.
 
I would not have considered having KVM establish the iSCSI connection and pass it through to the VM as a virtio disk to be "mapping a target directly to the VM". But I do see how it can be interpreted that way, and in that context you are correct. I read the question as: the VM establishes the iSCSI connection within the context of its BIOS/OS.
Oh yeah, you're right. I really don't see any problems with doing it inside the VM; it's a separate, disjoint problem which lives on its own. I use this myself for storage testing, virtualizing my own multi-portal iSCSI target with clients initiating with multipath, everything running inside PVE.

If you bind the LUN inside your VM, you will not have any problems with multipathing and VM live migration. The communication is done on different layers. So @alexskysilk, you can just map your iSCSI LUNs directly in your VM and format them there, and it'll be perfectly live-migratable. I would not go the way over the hypervisor unless you want to share the LUN with other VMs.
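A minimal sketch of doing that inside a Linux guest with open-iscsi (the portal IP and IQN are placeholders; a Windows guest would use the built-in iSCSI Initiator instead):

apt install open-iscsi
iscsiadm -m discovery -t sendtargets -p 10.10.10.1
iscsiadm -m node -T iqn.2000-01.com.example:storage.lun1 -p 10.10.10.1 --login
# the LUN then shows up as /dev/sdX inside the guest and simply travels with the VM on live migration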
 
Stipulated true for iSCSI. For SAS or FC... I don't believe so, but I intend to find out.
Yeah, it's not going to fly. SAS and FC cannot (yet) be virtualized and need to be passed through to work, so live migration is not possible. iSCSI is just Ethernet, and that is already virtualized.
 
