HP MSA1000 SAN storage.

mstng_67

Greetings!

I am having a problem getting an MSA1000 disk exposed to my Proxmox 1.7 host. I am running PM on a Dell M600 series blade server with a QLogic QME2572 HBA. Here is an applicable excerpt from the kernel message log:

Code:
May  2 16:58:11 pc-proxmox2 kernel: QLogic Fibre Channel HBA Driver: 8.03.01-k6
May  2 16:58:11 pc-proxmox2 kernel: QLogic Fibre Channel HBA Driver: 8.03.01-k6
May  2 16:58:11 pc-proxmox2 kernel:  QLogic QME2572 - PCI-Express Dual Channel 8Gb Fibre Channel HBA
May  2 16:58:11 pc-proxmox2 kernel: QLogic Fibre Channel HBA Driver: 8.03.01-k6
May  2 16:58:11 pc-proxmox2 kernel:  QLogic QME2572 - PCI-Express Dual Channel 8Gb Fibre Channel HBA

It appears to me that the kernel has successfully recognized the HBA and loaded the appropriate driver. Am I mistaken? Is there something else I need to do?

Here is some additional information that may be useful. I also have a Windows 2008 x64 blade that has the same HBA. I can expose the MSA1000 storage to it without a problem. It may be noteworthy that I did install the HP DSM (Device Specific Module) on the Windows host.

If anyone out there has experience with PM and SAN storage that can provide some guidance, I'd greatly appreciate it.

Very best regards.
 
I have some additional information that may be helpful. I have another blade from the same enclosure, using the same HBA, running Citrix XenServer (which is based on Red Hat...or so I'm told). I can expose the SAN storage to that OS as well. I also have an old HP DL380 with an Emulex HBA running Ubuntu Server 10.10 that can access the SAN storage.

In summary:
Windows 2008 x64 = access
Ubuntu Server 10.10 x86 = access
Citrix XenServer 5.6 (read Red Hat) = access
Proxmox 1.7 = NO ACCESS

It seems that the only OS that is unable to access the SAN is Proxmox 1.7. Is there a driver-related issue with this version of the distro? Is there anything that can be done?
 
Hi,
I have been using an FC SAN with Proxmox for a long time without any issues. It works fast and reliably!
Do you have the firmware under /lib/firmware (like ql2300... for QLogic)?
Have you assigned a LUN on the SAN storage to the WWN of the Proxmox host? E.g., do you see the WWN on the storage? If not, enable traffic between the Proxmox node and the SAN on the FC switch.
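For example, assuming the qla2xxx driver is loaded, the WWNs (WWPNs) of the host's FC ports show up under /sys/class/fc_host - these are what you map the LUN to on the MSA:
Code:
# one entry per HBA port
cat /sys/class/fc_host/host*/port_name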

Do you have tried "scsiadd -sp" on the proxmox-node? (apt-get install scsiadd).

Udo
 
Hi,
One more thing: perhaps your HBA is too new?
What driver does "lspci -v" show?
Like this:
Code:
lspci -v
...
02:00.1 Fibre Channel: QLogic Corp. ISP2432-based 4Gb Fibre Channel to PCI Express HBA (rev 03)
        Subsystem: QLogic Corp. Device 0138
        Flags: bus master, fast devsel, latency 0, IRQ 86
        I/O ports at 9400 [size=256]
        Memory at fbef8000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at fbe40000 [disabled] [size=256K]
        Capabilities: [44] Power Management version 2
        Capabilities: [4c] Express Endpoint, MSI 00
        Capabilities: [64] Message Signalled Interrupts: Mask- 64bit+ Queue=0/4 Enable+
        Capabilities: [74] Vital Product Data <?>
        Capabilities: [7c] MSI-X: Enable- Mask- TabSize=16
        Capabilities: [100] Advanced Error Reporting <?>
        Capabilities: [138] Power Budgeting <?>
        Kernel driver in use: qla2xxx
        Kernel modules: qla2xxx

Udo
 
Do you have the firmware under /lib/firmware (like ql2300... for QLogic)?

Yes. Well, it seems to be.

Code:
pc-proxmox2:/etc/pve# ls /lib/firmware/ql*.bin
/lib/firmware/ql2100_fw.bin  /lib/firmware/ql2200_fw.bin  /lib/firmware/ql2300_fw.bin  /lib/firmware/ql2322_fw.bin  /lib/firmware/ql2400_fw.bin  /lib/firmware/ql2500_fw.bin
Code:
pc-proxmox2:/etc/pve# ls /lib/firmware/qlogic/
1040.bin  12160.bin  1280.bin  sd7220.fw

Have you assigned a LUN on the SAN storage to the WWN of the Proxmox host? E.g., do you see the WWN on the storage? If not, enable traffic between the Proxmox node and the SAN on the FC switch.

I don't believe this is necessary at this point. Everything is at its defaults, which AFAIK means everything is allowed to connect. Later I may want to lock things down for security purposes, but for now it is all wide open.

Have you tried "scsiadd -sp" on the Proxmox node? (apt-get install scsiadd)

Here is the output:

Code:
pc-proxmox2:/etc/pve# scsiadd -sp
scsiadd:scsidump():could not open /proc/scsi/scsi (r): No such file or directory

What driver does "lspci -v" show?

Here is the output:

Code:
09:00.1 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
        Subsystem: QLogic Corp. Device 0167
        Flags: bus master, fast devsel, latency 0, IRQ 19
        I/O ports at dc00 [size=256]
        Memory at de3fc000 (64-bit, non-prefetchable) [size=16K]
        Expansion ROM at de500000 [disabled] [size=256K]
        Capabilities: [44] Power Management version 3
        Capabilities: [4c] Express Endpoint, MSI 00
        Capabilities: [88] Message Signalled Interrupts: Mask- 64bit+ Queue=0/5 Enable-
        Capabilities: [98] Vital Product Data <?>
        Capabilities: [a0] MSI-X: Enable+ Mask- TabSize=2
        Capabilities: [100] Advanced Error Reporting <?>
        Capabilities: [138] Power Budgeting <?>
        Kernel driver in use: qla2xxx
        Kernel modules: qla2xxx
 
...
I don't believe this is necessary at this point. Everything is at its defaults, which AFAIK means everything is allowed to connect. Later I may want to lock things down for security purposes, but for now it is all wide open.
If a disk is offered to your Proxmox node, you are right.
Here is the output:
Code:
pc-proxmox2:/etc/pve# scsiadd -sp
scsiadd:scsidump():could not open /proc/scsi/scsi (r): No such file or directory
...
No /proc/scsi?! That's strange.
Look with "dmesg | grep scsi".
Do you see a connected loop? Look with "dmesg | grep qla". There must be something like:
Code:
qla2xxx 0000:02:00.0: LOOP UP detected (4 Gbps).
BTW, on the FC switch you can normally configure different modes for the port (loop, ...). Perhaps you need to change the settings there.
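You can also check the current link state of the HBA port through sysfs, for example:
Code:
# "Online" means the link/loop is up; "Linkdown" usually points to a cable or switch problem
cat /sys/class/fc_host/host*/port_state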

Udo
 
In any case, do not use an outdated version - go for the current 1.8.
 

Udo,

Thanks for the assistance. I believe that I've corrected the issue. I was seeing "LOOP UP" followed immediately by "a cable is unplugged." So I swapped out the fibre cable and rebooted the server. Voila!

I do have a few other questions that you may be able to answer. For some reason the server seems to require a reboot in order to "see" the disk after some kind of configuration change. Is there a way to cause the driver to recheck the HBA without rebooting the server?

Also, I am curious if there is a way to set up shared SAN storage for live migration as is done for iSCSI and NFS? Since you are running SAN storage in your PM environment, I thought you'd be a good one to ask.

I really appreciate your support. Believe it or not, I've received better support here than I did with VMware. We recently went through a migration process where we nearly completely dumped VMware for PM. We have only one remaining VMware host, and the only reason we keep it is that a third-party software vendor will not support any other platform. Other than that, we've gone PM entirely on the server side. Excellent product!

Very best regards.
 
Udo,

Thanks for the assistance. I believe that I've corrected the issue. I was seeing "LOOP UP" followed immediately by "a cable is unplugged." So I swapped out the fibre cable and rebooted the server. Voila!
Hi,
good to hear it works now.
I do have a few other questions that you may be able to answer. For some reason the server seems to require a reboot in order to "see" the disk after some kind of configuration change. Is there a way to cause the driver to recheck the HBA without rebooting the server?
Yes, use scsiadd - with this tool you can scan for new devices.
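If scsiadd is not available, a rescan can also be triggered directly through sysfs (host0 is only an example - check what exists under /sys/class/scsi_host/ first):
Code:
# rescan all channels, targets and LUNs on this adapter
echo "- - -" > /sys/class/scsi_host/host0/scan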
Also, I am curious if there is a way to set up shared SAN storage for live migration as is done for iSCSI and NFS? Since you are running SAN storage in your PM environment, I thought you'd be a good one to ask.
FC-SAN RAIDs are a good choice for shared storage!
1. Create a LUN on the FC RAID - this depends on your use case. You get the best I/O results with SAS disks and RAID-10. If you mainly need lots of space, SATA and RAID-6 are perhaps enough.
2. On the FC RAID, allow access to this LUN from all Proxmox nodes of the cluster.
3. Check on the master for the disk (scsiadd, fdisk -l) and create a volume group on the LUN, e.g. if the disk is sdb:
Code:
pvcreate /dev/sdb
vgcreate -s 4M fc_sata_lun1 /dev/sdb
You can of course use a partition on the disk (type 8e) so that you can easily see that it is an LVM disk, but for partitions bigger than 2TB you need a GPT partition.
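A minimal sketch of the GPT case with parted (the device name /dev/sdb is only an example - adjust it to your LUN):
Code:
# create a GPT label with one big partition flagged for LVM, then use it as a physical volume
parted /dev/sdb mklabel gpt
parted /dev/sdb mkpart primary 0% 100%
parted /dev/sdb set 1 lvm on
pvcreate /dev/sdb1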
After that, go to the web GUI, Storage, and add the LVM as shared storage.
Then you can create VMs and use this storage for their disks (a VM disk is a logical volume on the LVM storage).
Live migration normally works very well!
I really appreciate your support. Believe it or not, I've received better support here than I did with VMware. We recently went through a migration process where we nearly completely dumped VMware for PM. We have only one remaining VMware host, and the only reason we keep it is that a third-party software vendor will not support any other platform. Other than that, we've gone PM entirely on the server side. Excellent product!

Very best regards.
Yes, VMware is OK, but Proxmox is fun!

Udo