How to set up a shared FC SAN on Proxmox

IT_Tech

Member
Oct 20, 2021
I am completely new to Proxmox. Currently, we have 2 Dell hosts, each with 2 FC ports connected directly to the FC ports of a Dell MD3820f storage array. On the SAN we created a single virtual disk, presented to both hosts as LUN 0. Both nodes can see the storage. Now, how do I configure LVM over iSCSI with multipath?

Thanks in advance


Config:


root@Test:~# lspci -nn | grep -i hba
06:00.0 Fibre Channel [0c04]: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA [1077:2532] (rev 02)
06:00.1 Fibre Channel [0c04]: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA [1077:2532] (rev 02)
root@Test:~# ls -l /sys/class/fc_host
total 0
lrwxrwxrwx 1 root root 0 Oct 20 13:44 host3 -> ../../devices/pci0000:00/0000:00:07.0/0000:06:00.0/host3/fc_host/host3
lrwxrwxrwx 1 root root 0 Oct 20 13:44 host4 -> ../../devices/pci0000:00/0000:00:07.0/0000:06:00.1/host4/fc_host/host4
root@Test:~# more /sys/class/fc_host/host?/port_name
::::::::::::::
/sys/class/fc_host/host3/port_name
::::::::::::::
0x21000024ff5cab56
::::::::::::::
/sys/class/fc_host/host4/port_name
::::::::::::::
0x21000024ff5cab57
root@Test:~# fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-|type|identifier'
Disk /dev/sdb: 67.75 GiB, 72746008576 bytes, 142082048 sectors
Disk model: PERC 6/i
Disk /dev/sda: 67.75 GiB, 72746008576 bytes, 142082048 sectors
Disk model: PERC 6/i
Disk /dev/sdc: 14.64 TiB, 16097004748800 bytes, 31439462400 sectors
Disk model: MD38xxf
Disk /dev/sdd: 14.64 TiB, 16097004748800 bytes, 31439462400 sectors
Disk model: MD38xxf
root@Test:~# systool -c fc_host -v
Class = "fc_host"

Class Device = "host3"
Class Device path = "/sys/devices/pci0000:00/0000:00:07.0/0000:06:00.0/host3/fc_host/host3"
dev_loss_tmo = "45"
fabric_name = "0xffffffffffffffff"
issue_lip = <store method only>
max_npiv_vports = "254"
node_name = "0x20000024ff5cab56"
npiv_vports_inuse = "0"
port_id = "0x0000e8"
port_name = "0x21000024ff5cab56"
port_state = "Online"
port_type = "LPort (private loop)"
speed = "8 Gbit"
supported_classes = "Class 3"
supported_speeds = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"
symbolic_name = "QLE2562 FW:v8.07.00 DVR:v10.02.00.104-k"
system_hostname = ""
tgtid_bind_type = "wwpn (World Wide Port Name)"
uevent =
vport_create = <store method only>
vport_delete = <store method only>

Device = "host3"
Device path = "/sys/devices/pci0000:00/0000:00:07.0/0000:06:00.0/host3"
fw_dump =
issue_logo = <store method only>
nvram = "ISP "
optrom_ctl = <store method only>
optrom =
reset = <store method only>
sfp = ""
uevent = "DEVTYPE=scsi_host"
vpd = "."


Class Device = "host4"
Class Device path = "/sys/devices/pci0000:00/0000:00:07.0/0000:06:00.1/host4/fc_host/host4"
dev_loss_tmo = "45"
fabric_name = "0xffffffffffffffff"
issue_lip = <store method only>
max_npiv_vports = "254"
node_name = "0x20000024ff5cab57"
npiv_vports_inuse = "0"
port_id = "0x0000ef"
port_name = "0x21000024ff5cab57"
port_state = "Online"
port_type = "LPort (private loop)"
speed = "8 Gbit"
supported_classes = "Class 3"
supported_speeds = "1 Gbit, 2 Gbit, 4 Gbit, 8 Gbit"
symbolic_name = "QLE2562 FW:v8.07.00 DVR:v10.02.00.104-k"
system_hostname = ""
tgtid_bind_type = "wwpn (World Wide Port Name)"
uevent =
vport_create = <store method only>
vport_delete = <store method only>

Device = "host4"
Device path = "/sys/devices/pci0000:00/0000:00:07.0/0000:06:00.1/host4"
fw_dump =
issue_logo = <store method only>
nvram = "ISP "
optrom_ctl = <store method only>
optrom =
reset = <store method only>
sfp = ""
uevent = "DEVTYPE=scsi_host"
vpd = "."


root@Test:~# echo "1" > /sys/class/fc_host/host4/issue_lip
root@Test:~# echo "1" > /sys/class/fc_host/host3/issue_lip
root@Test:~# rescan-scsi-bus.sh
Scanning SCSI subsystem for new devices
Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 0 0 0 0 ...
OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: PLDS Model: DVD+-RW DS-8A4S Rev: JD51
Type: CD-ROM ANSI SCSI revision: 05
...............................Scanning host 1 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning host 2 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 2 0 32 0 ...
OLD: Host: scsi2 Channel: 00 Id: 32 Lun: 00
Vendor: DP Model: BACKPLANE Rev: 1.07
Type: Enclosure ANSI SCSI revision: 05
Scanning for device 2 2 1 0 ...
OLD: Host: scsi2 Channel: 02 Id: 01 Lun: 00
Vendor: DELL Model: PERC 6/i Rev: 1.22
Type: Direct-Access ANSI SCSI revision: 05
Scanning for device 2 2 0 0 ...
OLD: Host: scsi2 Channel: 02 Id: 00 Lun: 00
Vendor: DELL Model: PERC 6/i Rev: 1.22
Type: Direct-Access ANSI SCSI revision: 05
Scanning host 3 for all SCSI target IDs, all LUNs
Scanning for device 3 0 0 0 ...
OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 00
Vendor: DELL Model: MD38xxf Rev: 0825
Type: Direct-Access ANSI SCSI revision: 05
Scanning for device 3 0 0 31 ...
OLD: Host: scsi3 Channel: 00 Id: 00 Lun: 31
Vendor: DELL Model: Universal Xport Rev: 0825
Type: Direct-Access ANSI SCSI revision: 05
Scanning host 4 for all SCSI target IDs, all LUNs
Scanning for device 4 0 0 0 ...
OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 00
Vendor: DELL Model: MD38xxf Rev: 0825
Type: Direct-Access ANSI SCSI revision: 05
Scanning for device 4 0 0 31 ...
OLD: Host: scsi4 Channel: 00 Id: 00 Lun: 31
Vendor: DELL Model: Universal Xport Rev: 0825
Type: Direct-Access ANSI SCSI revision: 05
0 new or changed device(s) found.
0 remapped or resized device(s) found.
0 device(s) removed.
 
hi @IT_Tech

There is no place for iSCSI in your current setup: your storage is attached via FC, not iSCSI.

Now that the FC storage is successfully presented to both hosts, you should find the Dell document that describes the multipath configuration for this storage model. It is probably a very vanilla, standard configuration, but I do recommend going with the official documentation over a blog.
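As a rough starting point until you confirm against Dell's own guide: on Proxmox VE (Debian), multipathing is handled by multipath-tools, and the MD3xxx family is commonly configured with the RDAC handler. The device-section values below are the settings typically published for this family, not something verified against the MD3820f manual specifically, so treat them as a sketch to check against Dell's documentation:

```shell
# Install the tools on both nodes (assumes the default Debian/PVE repositories)
apt install multipath-tools

# Example /etc/multipath.conf -- verify every value against Dell's MD38xxf guide
cat > /etc/multipath.conf <<'EOF'
defaults {
    user_friendly_names yes
    find_multipaths     yes
}
blacklist {
    device {
        # The MD-series "Universal Xport" management LUN must not be multipathed
        vendor  "DELL"
        product "Universal Xport"
    }
}
devices {
    device {
        vendor               "DELL"
        product              "MD38xxf"
        path_grouping_policy group_by_prio
        prio                 rdac
        hardware_handler     "1 rdac"
        path_checker         rdac
        failback             immediate
        no_path_retry        30
    }
}
EOF

systemctl restart multipathd
multipath -ll
```

If this works, the two paths you currently see as /dev/sdc and /dev/sdd should collapse into a single /dev/mapper device in the `multipath -ll` output.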

To set up an LVM pool on your new storage, follow the steps described here: https://pve.proxmox.com/wiki/Storage:_LVM
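Concretely, once the multipath device exists, the steps from that wiki page boil down to something like the following. The device name /dev/mapper/mpatha and the names vg_san / san-lvm are placeholders for this example; substitute whatever your setup actually shows:

```shell
# On ONE node only: initialize the multipath device and create a volume group on it
pvcreate /dev/mapper/mpatha
vgcreate vg_san /dev/mapper/mpatha

# Register the VG as shared LVM storage, visible cluster-wide
pvesm add lvm san-lvm --vgname vg_san --shared 1
```

The `--shared 1` flag tells Proxmox the volume group is reachable from all nodes, so the storage entry is created once and used by the whole cluster.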

Keep in mind that your storage setup is "shared", which means only "thick" LVM is supported and there is no snapshot support. You can refer to the table on this page for more information: https://pve.proxmox.com/wiki/Storage


Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
