Hi Marco,
I have a couple of servers booting from an EMC Fibre Channel SAN via QLogic QLA2342 2 Gbps Fibre HBAs. I ran into similar problems in the beginning, and this is the solution I found:
The first boot after the install fails, probably because the LUN is not available yet; the kernel has to wait a couple of seconds for the LUN to become ready. Enter the GRUB menu at boot by hitting the "e" key and edit the boot command line to include the highlighted parameter:
linux /vmlinuz-2.6 [...] ro rootdelay=10
To make this change permanent, use one of the following methods once the system has finished booting:
# sed -i -e "s/DEFAULT=\"quiet\"/DEFAULT=\"rootdelay=10\"/" /etc/default/grub
# update-grub
OR edit the following files so that each contains the corresponding line shown below it:
# nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=10"
# nano /etc/grub.d/10_linux
linux ${rel_dirname}/${basename} root=${linux_root_device_thisversion} ro rootdelay=10 ${args}
# update-grub
OR change the start parameters for GRUB directly:
# nano /boot/grub/grub.cfg
I prefer the first variant, as it has the highest probability of surviving a system update.
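The first variant is nothing more than a textual substitution in /etc/default/grub. Here is a minimal sketch of what the sed one-liner does, run against a scratch copy instead of the real file (path and file contents are illustrative):

```shell
# Illustration only: apply the same substitution to a scratch file
# instead of the real /etc/default/grub.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"\n' > /tmp/grub.test
sed -i -e 's/DEFAULT="quiet"/DEFAULT="rootdelay=10"/' /tmp/grub.test
cat /tmp/grub.test
# -> GRUB_CMDLINE_LINUX_DEFAULT="rootdelay=10"
```

After the next reboot you can confirm the parameter took effect with `cat /proc/cmdline`.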

Either one will enable Proxmox to find the boot LUN, provided you have set up the LUNs and the masking/zoning properly, and enabled boot support in the HBAs' BIOS.
Now, if you want support for multipathing, you need to do some more tweaking (the following example is for EMC Clariion FC SANs with support for ALUA).
Create a config file for multipath:
# nano /etc/multipath.conf
defaults {
    user_friendly_names yes
}
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|nbd)[0-9]*"
    devnode "^(xvd|vd)[a-z]*"
    devnode "^hd[a-z][[0-9]*]"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
    devnode "^dcssblk[0-9]*"
    devnode "^etherd"
    device {
        vendor "DGC"
        product "LUNZ"
    }
}
blacklist_exceptions {
    # wwid "*"
}
devices {
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        hardware_handler "1 emc"
        features "1 queue_if_no_path"
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        prio emc
        path_grouping_policy group_by_prio
        path_checker emc_clariion
        path_selector "round-robin 0"
        rr_weight uniform
        polling_interval 2
        no_path_retry 60
        dev_loss_tmo 120
        failback immediate
    }
}
multipaths {
    multipath {
        wwid "S_BOOT"
        alias "boot"
    }
    multipath {
        wwid "S_REPO"
        alias "repository"
    }
}
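The devnode entries in the blacklist section are regular expressions matched against kernel device names; anything that matches is ignored by multipathd. A quick sketch of how the first pattern behaves, checked with grep -E (the device names here are just examples):

```shell
# Test a few device names against the first blacklist pattern.
pattern='^(ram|raw|loop|fd|md|dm-|sr|scd|st|nbd)[0-9]*'
for dev in sda loop0 dm-3 st0; do
    if printf '%s\n' "$dev" | grep -Eq "$pattern"; then
        echo "$dev: blacklisted"
    else
        echo "$dev: visible to multipath"
    fi
done
# -> sda: visible to multipath
# -> loop0: blacklisted
# -> dm-3: blacklisted
# -> st0: blacklisted
```

This is why plain SCSI disks like sda still reach multipathd while loop, tape and device-mapper nodes are filtered out.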
Replace the dummy placeholder for the boot LUN's WWID ("S_BOOT") in the file with the actual WWID:
# sed -i -e "s/S_BOOT/`/lib/udev/scsi_id --whitelisted --device=/dev/sda | awk '{print $1}'`/" /etc/multipath.conf
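That sed call just splices the WWID reported by scsi_id into the config file. Sketched here on a scratch copy with a made-up WWID (on the real system the value comes from the scsi_id call above):

```shell
# Illustration on a scratch copy; the WWID below is fabricated.
printf 'wwid "S_BOOT"\n' > /tmp/multipath.test
WWID=36006016012345678   # real value: scsi_id --whitelisted --device=/dev/sda
sed -i -e "s/S_BOOT/$WWID/" /tmp/multipath.test
cat /tmp/multipath.test
# -> wwid "36006016012345678"
```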
Write the correct UUID for the boot LUN to /etc/fstab (use blkid -o value to get the bare UUID; the quoted UUID="..." form from plain blkid output would end up in fstab and break mounting):
# UUID=`blkid -o value -s UUID /dev/sda1`
# sed -i -e "s|/dev/sda1|UUID=$UUID|" /etc/fstab
Update initramfs to enable multipath support at re-boot:
# update-initramfs -c -t -k `uname -r`
Update the system and reboot; it should come up without problems:
# aptitude update
# aptitude full-upgrade
# reboot
Install multipath:
# aptitude install multipath-tools-boot
Reboot again!
# reboot
Tidy up the mess that the multipath install created:
# dpkg --configure -a
Edit the configuration file for LVM so that it will only "see" the multipath connections:
# sed -r -i -e "s/^([ ]*filter = )(.*)/\1[ \"a|\/dev\/disk\/by-id\/dm-uuid-.*-mpath-.*|\", \"r|.*|\" ]/" /etc/lvm/lvm.conf
# sed -r -i -e "s/^([ ]*)# (types = )(.*)/\1\2[ \"device-mapper\", 1 ]/" /etc/lvm/lvm.conf
OR
# nano /etc/lvm/lvm.conf
filter = [ "a|/dev/disk/by-id/dm-uuid-.*-mpath-.*|", "r|.*|" ]
types = [ "device-mapper", 1 ]
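The filter is an ordered list of accept ("a") and reject ("r") patterns; the first pattern that matches a device path decides whether LVM scans it. A rough sketch of that evaluation using the patterns above (simplified with grep -E; LVM uses its own matcher, and the paths are illustrative):

```shell
# First pattern accepts multipath by-id symlinks; the r|.*| catch-all
# rejects everything else.
accept='/dev/disk/by-id/dm-uuid-.*-mpath-.*'
for dev in /dev/sda /dev/disk/by-id/dm-uuid-part1-mpath-boot; do
    if printf '%s\n' "$dev" | grep -Eq "$accept"; then
        echo "$dev: accepted"
    else
        echo "$dev: rejected (falls through to r|.*|)"
    fi
done
# -> /dev/sda: rejected (falls through to r|.*|)
# -> /dev/disk/by-id/dm-uuid-part1-mpath-boot: accepted
```

The effect is that LVM only ever sees the device-mapper multipath devices and never the underlying sdX paths, so physical volumes are not detected twice.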
Update initramfs once again:
# update-initramfs -c -t -k `uname -r`