Nodes + fabric + SAN (Fibre Channel, Openfiler)

MrMourik

I'm trying to set up a (test) cluster with a separate Fibre Channel network for the SAN.

my setup:

1 SAN with QLogic 2460
2 nodes with QLogic 2460
1 McData 24-port 2 Gbit FC SAN switch

The SAN is running Openfiler, and the FC and iSCSI targets seem to be working; the switch even sees all the devices.

OpenFiler:

Target: 21:00:00:1b:32:1c:c4:7f (san)
Group Initiators
21:00:00:1B:32:1C:DC:7E (node1)
21:00:00:1B:32:1C:74:80 (node2)

The switch shows all ports as F_Port.

On both nodes I see this:

root@node01:/# lspci | grep -i fibre
0d:00.0 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)

root@node01:/# dmesg | grep -i qlogic
qla2xxx [0000:00:00.0]-0005: QLogic Fibre Channel HBA Driver: 8.04.00.04.06.3-k.
qla2xxx [0000:0d:00.0]-00fb:5: QLogic QLE2460 - PCI-Express Single Channel 4Gb Fibre Channel HBA.

root@node01:/# dmesg | grep qla2x
qla2xxx [0000:00:00.0]-0005: QLogic Fibre Channel HBA Driver: 8.04.00.04.06.3-k.
qla2xxx 0000:0d:00.0: PCI INT A -> GSI 33 (level, low) -> IRQ 33
qla2xxx [0000:0d:00.0]-001d: Found an ISP2432 irq 33 iobase 0xffffc900057d6000.
qla2xxx 0000:0d:00.0: irq 89 for MSI/MSI-X
qla2xxx 0000:0d:00.0: irq 90 for MSI/MSI-X
qla2xxx 0000:0d:00.0: setting latency timer to 64
qla2xxx 0000:0d:00.0: firmware: requesting ql2400_fw.bin
scsi5 : qla2xxx
qla2xxx [0000:0d:00.0]-00fb:5: QLogic QLE2460 - PCI-Express Single Channel 4Gb Fibre Channel HBA.
qla2xxx [0000:0d:00.0]-00fc:5: ISP2432: PCIe (2.5GT/s x4) @ 0000:0d:00.0 hdma+ host#=5 fw=5.06.05 (9496).

The weird thing is I DON'T see anything like LOOP.

but...

root@node01:/# cat /sys/class/fc_host/host5/port_state
Online
root@node01:/# cat /sys/class/fc_host/host5/port_type
NPort (fabric via point-to-point)
root@node01:/# cat /sys/class/fc_host/host5/speed
2 Gbit

So it seems to be up & running.
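
As a side check, the HBA's WWPN can also be read straight from sysfs and compared against the initiator entries configured in the Openfiler group (host5 assumed from the dmesg output above):

Code:
# print the node/port WWNs of the HBA; the port WWN should match
# one of the initiator group entries on the Openfiler side
cat /sys/class/fc_host/host5/node_name
cat /sys/class/fc_host/host5/port_name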

So I do a SCSI bus rescan:

root@node01:/# ./sbin/rescan-scsi-bus.sh -l
Host adapter 5 (qla2xxx) found.
Scanning host 5 for all SCSI target IDs, LUNs 0 1 2 3 4 5 6 7
0 new device(s) found.
0 device(s) removed.
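
For reference, the same rescan can also be triggered through sysfs without the script (host5 assumed from the dmesg output above):

Code:
# force a LIP / fabric re-login on the HBA
echo 1 > /sys/class/fc_host/host5/issue_lip
# then rescan all channels/targets/LUNs on that SCSI host
echo "- - -" > /sys/class/scsi_host/host5/scan
# and list what was found
lsscsi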

At this point I really can't figure out what to do...
Who has tips/ideas/a solution?

root@node01:/# pveversion -v
pve-manager: 2.2-24 (pve-manager/2.2/7f9cfa4c)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-80
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-16-pve: 2.6.32-80
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-1
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-28
qemu-server: 2.0-62
pve-firmware: 1.0-21
libpve-common-perl: 1.0-36
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-34
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1
 
Alright,

the switch zoning is OK.

If I do a rescan on a node, I see a session starting on the Openfiler! That seems to be fine.
But no devices are detected, so I think it's a configuration problem on the Openfiler SAN.

I'm going to reconfigure and try again.
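
For anyone debugging the same thing: before blaming the LUN mapping, it is worth checking that the target's WWPN actually shows up as a remote port on the node (host number and rport naming assumed from the output above):

Code:
# list the remote ports the HBA sees on the fabric; the SAN target's
# WWPN (21:00:00:1b:32:1c:c4:7f in this setup) should appear here
for r in /sys/class/fc_remote_ports/rport-*; do
    echo "$r: $(cat $r/port_name)"
done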
 
Finally!

I reconfigured OpenFiler and now I have:

[2:0:0:0] disk SCST_BIO pvm_vm_0 200 /dev/sdc

...but one problem left!

How do I add it to Proxmox?

Do I need to create an LVM volume group on /dev/sdc?
 
In the GUI:
1) Datacenter
2) Storage tab
3) Add drop-down -> choose iSCSI target
 
Right,

now I have a Fibre Channel target active on the SAN and working,
NOT over iSCSI but via LVM.

On the nodes:

Code:
  --- Logical volume ---
  LV Path                /dev/fc_san01_1/vm-100-disk-1
  LV Name                vm-100-disk-1
  VG Name                fc_san01_1
  LV UUID                Ot3kwl-GEI3-UOmx-MEm2-52n3-dMEV-hzMVd3
  LV Write Access        read/write
  LV Creation host, time node01, 2012-11-13 12:04:40 +0100
  LV Status              available
  # open                 1
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8

and the LVM storage is shared on the cluster.
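
For completeness: once the volume group exists on the FC LUN, the storage is added cluster-wide under Datacenter -> Storage -> Add -> LVM with the shared option enabled. The resulting entry in /etc/pve/storage.cfg should look roughly like this (storage ID and content type are just examples for this setup):

Code:
lvm: fc_san01_1
        vgname fc_san01_1
        content images
        shared 1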

BUT...

What is better?

- shared LVM with an FC backend
- an iSCSI target with an FC backend (possible??)

And what is the 'best' LUN setup?

- Create one LUN per node, let's say 200 GB
- Add every LUN via LVM and share it on the cluster

At the moment we have one 1 TiB SAN store and just one LUN 0 active (200 GB) (testing).
 
Hi,
on the first topic: shared LVM with an FC backend means disks distributed over FC to all cluster nodes (with the LVM on top of that)? Then yes, this is the normal, fast and stable way.
iSCSI over FC makes no sense (iSCSI is the poor man's FC, using normal Ethernet). Normally you can't run IP over FC (there are tricks, but it's not the best idea).

About the LUN setup:
One LUN per node is IMHO too inflexible - I use one LUN per storage type (e.g. a 2TB SATA RAID6 as one LUN and a 500GB SAS RAID10 as a second LUN...) and name the VG so I can remember which type it is (like sata_r6_lun3 or sas_r10_lun7 - perhaps also including the RAID level, if you have more).
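
(A rough sketch of that naming scheme - the device paths here are just placeholders:)

Code:
# one VG per storage type/LUN, named so the backing type is obvious later
vgcreate sata_r6_lun3 /dev/sdd1
vgcreate sas_r10_lun7 /dev/sde1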

Udo
 
I see! I mixed up iSCSI vs. FC... I'm sorry ;) - so I only use Fibre Channel over fibre optics :) and NO iSCSI. (I learned more about the difference yesterday...)

On node1 I do this:

Code:
root@node01:/# lsscsi
[5:0:0:0]    disk    SCST_BIO vmfs_fast_1       200  /dev/sdd

root@node01:/# fdisk /dev/sdd
   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1       23497   188738628+  83  Linux

root@node01:/# pvcreate /dev/sdd1

root@node01:/# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               pve
  PV Size               67.25 GiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              17215
  Free PE               2143
  Allocated PE          15072
  PV UUID               fuFeuu-9hI8-H3xr-bxB4-ReRv-aOy8-IlnV5T

root@node01:/# vgcreate fc_san01_1 /dev/sdd1
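
Since the VG lives on a shared FC LUN, it should become visible on node2 as well once that node rescans its SCSI bus; something like this, run on node2, can confirm it (the host number is assumed to be the same as on node1 and may differ):

Code:
# rescan the FC host on node2, then check that the new PV/VG are seen
echo "- - -" > /sys/class/scsi_host/host5/scan
pvscan
vgs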

About the LUNs:

We now have a SAN with 6x 300 GB in RAID5 (1.1 TB) (not RAID10 yet, I'm thinking about changing that...).

Is it a good idea to just create one LUN of 1.1 TB?
Better question:

why should I set up MORE LUNs?
 