Cluster with multiple iSCSI connections

Apr 1, 2023
Hello everybody ...

I really need some help...
I will explain my environment.

I have a Dell ME5012 storage array with 2 controllers and 4 iSCSI host ports.
They are configured like this:

Controller A: Port 1 -> IP 172.16.0.1/24
Controller A: Port 3 -> IP 172.16.2.1/24

Controller B: Port 1 -> IP 172.16.1.1/24
Controller B: Port 3 -> IP 172.16.3.1/24

The storage only has a single 7TB LUN.

I have 2 Dell hosts with 2 iSCSI ports each.

Host A (lion): -> IP 172.16.0.2/24
Host A (lion): -> IP 172.16.1.2/24

Host B (pantrho): -> IP 172.16.2.2/24
Host B (pantrho): -> IP 172.16.3.2/24

The cables run directly from the servers to the storage, and the ping tests between them work.

I set up a cluster with Hosts A and B: I create the cluster on Host A and join Host B to it.
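(For reference, the CLI equivalent of those GUI steps is roughly the following; the cluster name "mycluster" is just an example.)

# on Host A (lion): create the cluster
pvecm create mycluster

# on Host B (pantrho): join the cluster using Host A's management IP
pvecm add <management-IP-of-lion>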

Now I add the LUN as iSCSI storage on the respective hosts:

Host A: STORAGE_0.1 and STORAGE_1.1
Host B: STORAGE_2.1 and STORAGE_3.1
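
Roughly, the CLI equivalent of those four entries would be something like this (the target IQN is just a placeholder; the real one comes from the array during discovery):

# restricted to lion
pvesm add iscsi STORAGE_0.1 --portal 172.16.0.1 --target <IQN-reported-by-the-ME5> --nodes lion
pvesm add iscsi STORAGE_1.1 --portal 172.16.1.1 --target <IQN-reported-by-the-ME5> --nodes lion
# restricted to pantrho
pvesm add iscsi STORAGE_2.1 --portal 172.16.2.1 --target <IQN-reported-by-the-ME5> --nodes pantrho
pvesm add iscsi STORAGE_3.1 --portal 172.16.3.1 --target <IQN-reported-by-the-ME5> --nodes pantrho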

Then I create an LVM storage to use for my virtual machines. For Base Storage, I choose "Existing volume groups" (screenshot: LVM.png).

Host A connects, but on Host B the storage always shows a question mark, and I can't migrate VM 100 from Host A to Host B.


What is the correct way to set up this environment?

Thank you.

ps: sorry about my English.
 

You need to:
a) remove any LVM configuration you managed to create
b) remove most of the Proxmox configuration, i.e. the 4 iSCSI storage devices
c) review https://pve.proxmox.com/wiki/ISCSI_Multipath, do a few searches on "proxmox multipath", and create a proper multipath configuration
d) provision an LVM VG on top of the multipath device
e) point an LVM storage type in Proxmox to that VG (rough sketch of c-e below)
f) enjoy
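
A rough sketch of c) through e), assuming the array presents the same LUN on all four portals; the VG name "vg_me5", the storage ID "me5-lvm", and the map name "mpatha" are just examples, check what multipath -ll actually reports:

# on each node: discover and log in to the portals that node can reach
iscsiadm -m discovery -t sendtargets -p 172.16.0.1    # lion; repeat for 172.16.1.1
iscsiadm -m discovery -t sendtargets -p 172.16.2.1    # pantrho; repeat for 172.16.3.1
iscsiadm -m node --login

# with a proper multipath.conf in place, both paths collapse into one device
multipath -ll                          # note the map name, e.g. mpatha

# on ONE node only: create the LVM VG on top of the multipath device
pvcreate /dev/mapper/mpatha
vgcreate vg_me5 /dev/mapper/mpatha

# add a single shared LVM storage for the whole cluster (replaces the four iSCSI entries)
pvesm add lvm me5-lvm --vgname vg_me5 --content images,rootdir --shared 1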


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hello,
I have a similar problem.
Two Dell R640s connected directly via iSCSI to a Dell ME4024.
Dell_1 iSCSI:
eno1 1.1.1.2 -> A0 1.1.1.1
eno2 3.3.3.2 -> B0 3.3.3.1
vmbr0 eno3 192.168.179.241/24
Dell_2 iSCSI:
eno1 2.2.2.2 -> A1 2.2.2.1
eno2 4.4.4.2 -> B1 4.4.4.1
vmbr0 eno3 192.168.179.243/24
I made a cluster (available from 192.168.179.241 and .243).
The goal is to use one big shared LUN for the two hosts (nodes).
I mapped it via iSCSI and configured iscsid.conf and multipath.conf.
In iscsid.conf only 3 changes:
node.startup = automatic
node.leading_login = No
node.session.timeo.replacement_timeout = 15
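The mapping itself was done roughly like this (a sketch; on Dell_2 the portals are 2.2.2.1 and 4.4.4.1):

# on Dell_1
iscsiadm -m discovery -t sendtargets -p 1.1.1.1
iscsiadm -m discovery -t sendtargets -p 3.3.3.1
iscsiadm -m node --login
iscsiadm -m session            # both sessions should be listed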
multipath.conf:

## Default System Values
defaults {
    user_friendly_names yes
    find_multipaths yes
    max_fds 8192
    polling_interval 5
    queue_without_daemon no
}

## Blacklist Exceptions
blacklist_exceptions {
    wwid "35002538f0263231b"
    wwid "35002538f02632328"
    device {
        vendor "DellEMC"
        product "ME4"
    }
}

## Dell Device Configuration
devices {
    device {
        vendor "DellEMC"
        product "ME4"
        path_grouping_policy group_by_prio
        path_checker "tur"
        hardware_handler "1 alua"
        prio "alua"
        failback immediate
        features "2 pg_init_retries 50"
        rr_weight "uniform"
        path_selector "service-time 0"
    }
}

multipaths {
    multipath {
        wwid "35002538f0263231b"
    }
    multipath {
        wwid "35002538f02632328"
    }
}
We can ping from Dell_1: 1.1.1.1 and 3.3.3.1, and from Dell_2: 1.1.1.1, 2.2.2.1 and 4.4.4.1.
Any advice? What next?
 