Proxmox on a direct-attached shared storage system

Hi David

Sorry, I do not have the extra storage module, so I'm unable to test this as yet. I think Petrus is more likely to get to it before me. :)

Mike
 
I actually think the setup could be very simple. I downloaded the Intel package that adds multipath support and pulled out the relevant part; it was a very small and simple package.

It had a dependency on the multipath package and a very simple config file.

You will need to install Debian Lenny's multipath-tools package and then add the config, which, as I said, looked very simple. The biggest issue I could see is making sure that you do not use the native device nodes (/dev/sd*) for the storage controller.
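
Just as a sketch of what I'd expect the steps to look like on Lenny (untested on my side; package and init script names are the standard Debian ones, not something from the Intel package):

Code:
  apt-get install multipath-tools
  # make sure the device-mapper multipath modules are loaded
  modprobe dm-multipath
  modprobe dm-round-robin
  # drop the vendor config into /etc/multipath.conf, then:
  /etc/init.d/multipath-tools restart
  multipath -ll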

Mike
 
Okay, thanks, maybe I'm expecting more trouble than I will actually encounter :)
 
It's not going so well with the dual storage controllers.

I used the procedure on Intel's support website for SLES 11 (http://download.intel.com/support/motherboards/server/sb/sles11_mpio_setup_bkm_v2.pdf) as a basis, as well as their MPIO configuration file sample for SUSE 11.

Instead of installing the multipath tools from RPM, I did:

Code:
  apt-get install multipath-tools multipath-tools-boot
As part of this, it tried to run update-initramfs, which failed with the suggestion to run it with the -t option. I manually ran

Code:
   update-initramfs -c -t -k 2.6.18-2-pve
I'm currently stuck on two issues:

(1) GRUB needs to use the multipath devices instead of the raw devices, but how? I tried putting the relevant /dev/mapper or /dev/disk/by-id device in the GRUB device.map for (hd0), but that fails. For example (the multipath daemon is running, system not rebooted yet):

Code:
  proxmox-ims-1:~# ls -al /dev/mapper
 total 0
 drwxr-xr-x  2 root root     160 Apr 23 22:38 .
 drwxr-xr-x 15 root root   13880 Apr 24 06:25 ..
 brw-rw----  1 root disk 253,  3 Apr 23 22:38 222100001551e1c5c
 brw-rw----  1 root disk 253,  4 Apr 23 22:38 2228e00015529cd2b
 crw-rw----  1 root root  10, 63 Apr 22 16:36 control
 brw-rw----  1 root disk 253,  2 Apr 22 16:36 pve-data
 brw-rw----  1 root disk 253,  1 Apr 22 16:36 pve-root
 brw-rw----  1 root disk 253,  0 Apr 22 16:36 pve-swap
 proxmox-ims-1:~# cat /boot/grub/device.map
 (hd0)   /dev/mapper/2228e00015529cd2b
 proxmox-ims-1:~# grub --device-map=/boot/grub/device.map
 Unknown partition table signature
     GNU GRUB  version 0.97  (640K lower / 3072K upper memory)
        [ Minimal BASH-like line editing is supported.   For
          the   first   word,  TAB  lists  possible  command
          completions.  Anywhere else TAB lists the possible
          completions of a device/filename. ]
 grub> root (hd0)
 root (hd0)
  Filesystem type unknown, using whole disk
 grub> setup (hd0,0)
 setup (hd0,0)
 Error 17: Cannot mount selected partition
The Debian wiki suggests I need a patched GRUB for this to work (http://wiki.debian.org/DebianInstaller/MultipathSupport). I downloaded the source tarball from http://git.debian.org/?p=users/agx/grub-legacy.git and did

Code:
apt-get install build-essential gcc-multilib
 ./configure
 make
 make install
Now running GRUB gives:

Code:
proxmox-ims-1:~/grub-legacy# grub --device-map=/boot/grub/device.map
     GNU GRUB  version 0.97  (640K lower / 3072K upper memory)
  [ Minimal BASH-like line editing is supported.  For the first word, TAB
    lists possible command completions.  Anywhere else TAB lists the possible
    completions of a device/filename. ]
 grub> root (hd0)
 root (hd0)
 Floating point exception
I have no clue how to proceed for now; suggestions are welcome...


(2) LVM uses the raw partition /dev/sda2 as a PV, but how do I make it use the multipath device instead? It doesn't detect the multipath devices as possible PVs in the first place.
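
For reference, the kind of thing I was thinking of trying (a sketch only; the filter regexes are my own guess, not from the Proxmox or Intel docs) is telling LVM to scan the multipath maps and ignore the raw SCSI paths via /etc/lvm/lvm.conf, then rescanning:

Code:
  # /etc/lvm/lvm.conf -- devices section (illustrative)
  devices {
      # accept device-mapper/multipath devices, reject the raw /dev/sd* paths
      filter = [ "a|/dev/mapper/|", "r|/dev/sd|" ]
  }
  # then rescan:
  pvscan
  vgscan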

Anyone know how to do this?


Thanks,
David
 
Small update:

On (1) above: it seems that no patched GRUB is necessary after all! It also helps to type *correct* commands: root (hd0,0) instead of root (hd0), for example...
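
For the record, the working sequence looks roughly like this (a sketch; (hd0,0) assumes /boot lives on the first partition of the mapped disk, and setup (hd0) writes the boot loader to the MBR):

Code:
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit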

On (2) above: seems to automagically work.
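
To double-check which devices LVM actually bound the PVs to, something like this does the trick (sketch, standard LVM2 tools):

Code:
  pvs -o pv_name,vg_name,pv_size
  # or:
  pvdisplay | grep 'PV Name'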

I'll continue the testing, but it seems it's working :)
 
Hi David,

Quote (David):
I used the procedure on Intel's support website for SLES 11 (http://download.intel.com/support/motherboards/server/sb/sles11_mpio_setup_bkm_v2.pdf) as a basis, as well as their MPIO configuration file sample for SUSE 11.

We are running Debian systems with virtual disks provided by the Intel Modular Server, and I can't get multipath working as it should. I have only installed multipath-tools for now, to avoid problems with booting until I'm confident that regular multipath works.
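
For completeness, the way I'm iterating on the config is just the usual flush-and-recreate cycle (nothing special):

Code:
  # after editing /etc/multipath.conf
  multipath -F     # flush all existing multipath maps
  multipath -v2    # re-create the maps with the new configuration
  multipath -ll    # show the resulting topology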

Would you mind posting your complete /etc/multipath.conf here so I can compare and hopefully see what I got wrong? My devices section in the configuration file looks like this:

Code:
devices {
        device {
                vendor                  "Intel"
                product                 "Multi-Flex"
                path_grouping_policy    group_by_prio
                getuid_callout          "/lib/udev/scsi_id -g -u -d /dev/%n"
                #prio_callout           "/sbin/mpath_prio_intel /dev/%n"
                prio_callout            "/sbin/mpath_prio_balance_units %d"
                path_checker            tur
                path_selector           "round-robin 0"
#               hardware_handler        "1 alua"
                failback                immediate
                rr_weight               uniform
                rr_min_io               100
                no_path_retry           queue
                features                "1 queue_if_no_path"
        }
}
I have the hardware_handler option commented out above, because otherwise multipath reports "unknown hardware handler type" and no multipathing is set up at all. Also, there is no /sbin/mpath_prio_intel, which is why I'm using mpath_prio_balance_units above. What about you?
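
Just a guess on my side: maybe "unknown hardware handler type" simply means the 2.6.18 PVE kernel doesn't ship an ALUA handler module at all. A quick way to see which handler-related modules are loaded or available (sketch):

Code:
  lsmod | grep -E 'dm_multipath|dm_round_robin|dm_emc'
  # list what the running kernel ships at all:
  find /lib/modules/$(uname -r) -name 'dm-*.ko' -o -name 'scsi_dh*.ko'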

BTW, what about steps 9 to 11 (kernel modules, mkinitrd) in Intel's MPIO configuration procedure above? They don't apply to Debian, do they? Did you disregard them too?

I'm attaching the output of multipath -ll -v3 for the sake of completeness. There are 2 virtual disks with 2 paths each (sda+sdb = system disk, sdc+sdd = database disk), but multipath seems to be set up correctly for sdd only.
Code:
~# multipath -ll -v3
ram0: device node name blacklisted
ram1: device node name blacklisted
ram2: device node name blacklisted
ram3: device node name blacklisted
ram4: device node name blacklisted
ram5: device node name blacklisted
ram6: device node name blacklisted
ram7: device node name blacklisted
ram8: device node name blacklisted
ram9: device node name blacklisted
ram10: device node name blacklisted
ram11: device node name blacklisted
ram12: device node name blacklisted
ram13: device node name blacklisted
ram14: device node name blacklisted
ram15: device node name blacklisted
dm-0: device node name blacklisted
sda: not found in pathvec
sda: mask = 0x5
sda: dev_t = 8:0
sda: size = 524289024
sda: subsystem = scsi
sda: vendor = Intel
sda: product = Multi-Flex
sda: rev = 0302
sda: h:b:t:l = 0:0:0:0
sdb: not found in pathvec
sdb: mask = 0x5
sdb: dev_t = 8:16
sdb: size = 1677723648
sdb: subsystem = scsi
sdb: vendor = Intel
sdb: product = Multi-Flex
sdb: rev = 0302
sdb: h:b:t:l = 0:0:0:1
sdc: not found in pathvec
sdc: mask = 0x5
sdc: dev_t = 8:32
sdc: size = 524289024
sdc: subsystem = scsi
sdc: vendor = Intel
sdc: product = Multi-Flex
sdc: rev = 0302
sdc: h:b:t:l = 0:0:1:0
dm-1: device node name blacklisted
sdd: not found in pathvec
sdd: mask = 0x5
sdd: dev_t = 8:48
sdd: size = 1677723648
sdd: subsystem = scsi
sdd: vendor = Intel
sdd: product = Multi-Flex
sdd: rev = 0302
sdd: h:b:t:l = 0:0:1:1
sr0: device node name blacklisted
===== paths list =====
uuid hcil    dev dev_t pri dm_st  chk_st  vend/prod/rev
     0:0:0:0 sda 8:0   -1  [undef][undef] Intel   ,Multi-Flex
     0:0:0:1 sdb 8:16  -1  [undef][undef] Intel   ,Multi-Flex
     0:0:1:0 sdc 8:32  -1  [undef][undef] Intel   ,Multi-Flex
     0:0:1:1 sdd 8:48  -1  [undef][undef] Intel   ,Multi-Flex
params = 1 queue_if_no_path 0 1 1 round-robin 0 2 1 8:16 100 8:48 100
status = 2 0 0 0 1 1 A 0 2 0 8:16 F 8 8:48 A 0
sdb: mask = 0x4
sdb: path checker = tur (controller setting)
sdb: state = 2
sdb: mask = 0x8
sdb: getprio = /sbin/mpath_prio_balance_units %d (controller setting)
sdb: prio = 1
sdd: mask = 0x4
sdd: path checker = tur (controller setting)
sdd: state = 2
sdd: mask = 0x8
sdd: getprio = /sbin/mpath_prio_balance_units %d (controller setting)
sdd: prio = 1
database (22209000155faaffa) dm-0 Intel   ,Multi-Flex
[size=800G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=2][active]
 \_ 0:0:0:1 sdb 8:16  [failed][ready]
 \_ 0:0:1:1 sdd 8:48  [active][ready]
Would be great to get this running with your input...
 
Hi bittner,

My multipath.conf looked like this:

Code:
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(hd|xvd)[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

devices {
        device {
        vendor                  "Intel"
        product                 "Multi-Flex"
        path_grouping_policy    "group_by_prio"
        getuid_callout          "/lib/udev/scsi_id -g -u /dev/%n"
        prio_callout            "/sbin/mpath_prio_alua /dev/%n"
        # prio                  "intel"
        path_checker            tur
        path_selector           "round-robin 0"
        hardware_handler        "1 alua"
        failback                immediate
        rr_weight               uniform
        rr_min_io               100
        no_path_retry           queue
        features                "1 queue_if_no_path"
        }
}

multipaths {
        multipath {
                wwid    2227d000155e89d48
                alias   system
        }
}
Step 9 I ignored. The equivalent of step 10 is done automatically thanks to apt-get. The equivalent of step 11 was necessary and followed from "apt-get install multipath-tools-boot" (see my other post, Re: Intel Modular Server System MFSYS25 Installation Experiences):

Code:
update-initramfs -c -t -k 2.6.18-2-pve
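
If you want to verify that the multipath bits actually ended up in the new initramfs, something like this works (a sketch; lsinitramfs isn't available on Lenny, so just list the gzipped cpio archive by hand, with the image name matching the kernel version above):

Code:
  zcat /boot/initrd.img-2.6.18-2-pve | cpio -it | grep -i multipath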

Hope this helps. I'm not a multipath specialist either :)

Best regards,
David
 
Wow, David, you're my hero!
Quote (David):
My multipath.conf looked like this:

The only (significant) difference from my multipath.conf file was that you had a prio_callout instead of the prio option. Now this works for me! ... for at least one disk: the status of both paths is now either active or enabled. This is true for my second disk ('database' = sdd+sdb).

Code:
~# multipath -ll
database (22209000155faaffa) dm-0 Intel   ,Multi-Flex
[size=800G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=50][active]
 \_ 0:0:1:1 sdd 8:48  [active][ready]
\_ round-robin 0 [prio=1][enabled]
 \_ 0:0:0:1 sdb 8:16  [active][ready]
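
For reference, the effective change (reconstructed from the two configs posted in this thread, shown diff-style) boils down to this single line in the device section:

Code:
  -        prio_callout            "/sbin/mpath_prio_balance_units %d"
  +        prio_callout            "/sbin/mpath_prio_alua /dev/%n"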
What I still have unsolved is my 'system' disk (= sda+sdc, apparently), which doesn't show up in multipath at all. It is mounted and accessible as /dev/sda, while executing fdisk -l /dev/sdc yields nothing.

BTW, I've figured out that without the -ll parameter the multipath -v3 command yields different, and much more verbose (debug), output, so here it is again:

Code:
~# multipath -v3
ram0: device node name blacklisted
ram1: device node name blacklisted
ram2: device node name blacklisted
ram3: device node name blacklisted
ram4: device node name blacklisted
ram5: device node name blacklisted
ram6: device node name blacklisted
ram7: device node name blacklisted
ram8: device node name blacklisted
ram9: device node name blacklisted
ram10: device node name blacklisted
ram11: device node name blacklisted
ram12: device node name blacklisted
ram13: device node name blacklisted
ram14: device node name blacklisted
ram15: device node name blacklisted
dm-0: device node name blacklisted
sda: not found in pathvec
sda: mask = 0x1f
sda: dev_t = 8:0
sda: size = 524289024
sda: subsystem = scsi
sda: vendor = Intel
sda: product = Multi-Flex
sda: rev = 0302
sda: h:b:t:l = 0:0:0:0
sda: serial = 4C20202000000000000000009057BEB30ACB2C88
sda: getprio = /sbin/mpath_prio_alua /dev/%n (controller setting)
sda: prio = 50
sda: getuid = /lib/udev/scsi_id -g -u /dev/%n (controller setting)
sda: uid = 222ef0001555ab385 (callout)
sdb: not found in pathvec
sdb: mask = 0x1f
sdb: dev_t = 8:16
sdb: size = 1677723648
sdb: subsystem = scsi
sdb: vendor = Intel
sdb: product = Multi-Flex
sdb: rev = 0302
sdb: h:b:t:l = 0:0:0:1
sdb: serial = 4C202020000000000000000000B0B8C1B122968F
sdb: getprio = /sbin/mpath_prio_alua /dev/%n (controller setting)
sdb: prio = 1
sdb: getuid = /lib/udev/scsi_id -g -u /dev/%n (controller setting)
sdb: uid = 22209000155faaffa (callout)
sdc: not found in pathvec
sdc: mask = 0x1f
sdc: dev_t = 8:32
sdc: size = 524289024
sdc: subsystem = scsi
sdc: vendor = Intel
sdc: product = Multi-Flex
sdc: rev = 0302
sdc: h:b:t:l = 0:0:1:0
sdc: serial = 4C20202000000000000000009057BEB30ACB2C88
sdc: getprio = /sbin/mpath_prio_alua /dev/%n (controller setting)
sdc: prio = 1
sdc: getuid = /lib/udev/scsi_id -g -u /dev/%n (controller setting)
sdc: uid = 222ef0001555ab385 (callout)
sdd: not found in pathvec
sdd: mask = 0x1f
sdd: dev_t = 8:48
sdd: size = 1677723648
sdd: subsystem = scsi
sdd: vendor = Intel
sdd: product = Multi-Flex
sdd: rev = 0302
sdd: h:b:t:l = 0:0:1:1
sdd: serial = 4C202020000000000000000000B0B8C1B122968F
sdd: getprio = /sbin/mpath_prio_alua /dev/%n (controller setting)
sdd: prio = 50
sdd: getuid = /lib/udev/scsi_id -g -u /dev/%n (controller setting)
sdd: uid = 22209000155faaffa (callout)
dm-1: device node name blacklisted
sr0: device node name blacklisted
===== paths list =====
uuid              hcil    dev dev_t pri dm_st  chk_st  vend/prod/rev
222ef0001555ab385 0:0:0:0 sda 8:0   50  [undef][undef] Intel   ,Multi-Flex
22209000155faaffa 0:0:0:1 sdb 8:16  1   [undef][undef] Intel   ,Multi-Flex
222ef0001555ab385 0:0:1:0 sdc 8:32  1   [undef][undef] Intel   ,Multi-Flex
22209000155faaffa 0:0:1:1 sdd 8:48  50  [undef][undef] Intel   ,Multi-Flex
params = 1 queue_if_no_path 0 2 1 round-robin 0 1 1 8:48 100 round-robin 0 1 1 8:16 100
status = 2 0 0 0 2 1 A 0 1 0 8:48 A 0 E 0 1 0 8:16 A 0
sdd: mask = 0x4
sdd: path checker = tur (controller setting)
sdd: state = 2
sdb: mask = 0x4
sdb: path checker = tur (controller setting)
sdb: state = 2
sda: ownership set to system
sda: not found in pathvec
sda: mask = 0xc
sda: path checker = tur (controller setting)
sda: state = 2
sda: prio = 50
sdc: ownership set to system
sdc: not found in pathvec
sdc: mask = 0xc
sdc: path checker = tur (controller setting)
sdc: state = 2
sdc: prio = 1
system: pgfailback = -2 (controller setting)
system: pgpolicy = group_by_prio (controller setting)
system: selector = round-robin 0 (controller setting)
system: features = 1 queue_if_no_path (controller setting)
system: hwhandler = 0 (internal default)
system: rr_weight = 1 (internal default)
system: minio = 100 (controller setting)
system: no_path_retry = -2 (controller setting)
pg_timeout = NONE (internal default)
system: set ACT_CREATE (map does not exist)
system: domap (0) failure for create/reload map
sda: ownership set to system
sda: not found in pathvec
sda: mask = 0xc
sda: path checker = tur (controller setting)
sda: state = 2
sda: getprio = /sbin/mpath_prio_alua /dev/%n (controller setting)
sda: prio = 50
sdc: ownership set to system
sdc: not found in pathvec
sdc: mask = 0xc
sdc: path checker = tur (controller setting)
sdc: state = 2
sdc: getprio = /sbin/mpath_prio_alua /dev/%n (controller setting)
sdc: prio = 1
system: pgfailback = -2 (controller setting)
system: pgpolicy = group_by_prio (controller setting)
system: selector = round-robin 0 (controller setting)
system: features = 1 queue_if_no_path (controller setting)
system: hwhandler = 0 (internal default)
system: rr_weight = 1 (internal default)
system: minio = 100 (controller setting)
system: no_path_retry = -2 (controller setting)
pg_timeout = NONE (internal default)
system: set ACT_CREATE (map does not exist)
system: domap (0) failure for create/reload map
Any ideas on why the system disk is disregarded by multipath?

Interestingly, the /var/lib/multipath/bindings file does list both devices with their correct IDs. What's maybe worth noting too: when I run multipath -F to flush all device maps after changing the configuration file, the program's exit status is 1 (FAIL) instead of 0. When I then simply run multipath afterwards, the exit status is 0 (SUCCESS), but device-mapper writes error messages to syslog.

Code:
~# multipath -F || echo FAIL && date
FAIL
Wed Jul 28 15:24:54 CEST 2010
~# multipath && echo SUCCESS && date && tail /var/log/syslog
create: database (22209000155faaffa)  Intel   ,Multi-Flex
[size=800G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=50][undef]
 \_ 0:0:1:1 sdd 8:48  [undef][ready]
\_ round-robin 0 [prio=1][undef]
 \_ 0:0:0:1 sdb 8:16  [undef][ready]
libdevmapper: libdm-common.c(312): Created /dev/mapper/database
SUCCESS
Wed Jul 28 15:25:05 CEST 2010
Jul 28 15:24:36 cmn1 kernel: [1927061.414134] device-mapper: ioctl: error adding target to table
Jul 28 15:25:05 cmn1 kernel: [1927105.721377] device-mapper: table: 254:0: multipath: error getting device
Jul 28 15:25:05 cmn1 kernel: [1927105.721377] device-mapper: ioctl: error adding target to table
Jul 28 15:25:05 cmn1 kernel: [1927105.729378] device-mapper: table: 254:1: multipath: error getting device
Jul 28 15:25:05 cmn1 kernel: [1927105.729378] device-mapper: ioctl: error adding target to table
~#
Does this ring a bell to anyone?
 
Quote (bittner):
Wow, David, you're my hero!

Glad it helped :)

Quote (bittner):
What I still have unsolved is my 'system' disk (= sda+sdc, apparently), which doesn't show up in multipath at all. It is mounted and accessible as /dev/sda, while executing fdisk -l /dev/sdc yields nothing.

Is that the disk you're booting from? Have you installed multipath-tools-boot already, so it plays nicely with multipath? That's the only thing that comes to mind that may be wrong in your set-up. If that doesn't do it, then I have no clue either, and I'll stop being the hero for today...
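
A quick way to check, just as a sketch (package names as used earlier in this thread):

Code:
  dpkg -l multipath-tools multipath-tools-boot
  # if multipath-tools-boot is missing:
  apt-get install multipath-tools-boot
  update-initramfs -u -k $(uname -r)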
 
Quote (bittner):
What's maybe worth noting too: when I run multipath -F to flush all device maps after changing the configuration file, the program's exit status is 1 (FAIL) instead of 0.

I've figured out that the exit status for -F is indeed wrongly reported (always 1), so this is actually a small bug in version 0.4.8 of multipath-tools that has already been fixed in the current version 0.4.9. See the source code, multipath/main.c, line 415 (v0.4.8) vs. 450 (v0.4.9).
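
To see which version is actually installed on your box (sketch, standard Debian tooling):

Code:
  dpkg -l multipath-tools | tail -n 1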
 
Added HOW-TO to WIKI for Multi-path

I created a multipath how-to specific to the Intel MFSYS25/35 on the PVE wiki HERE. Please take a look and provide any feedback for improvements!

Thanks everyone for your help!!
 
Re: Added HOW-TO to WIKI for Multi-path

Thanks for the wiki update!
 
