mfsys35 multipath config

One of my customers has an Intel MFSYS35 running PVE 1.9 and just installed an extra SAS controller. We'd like to enable multipath drive array access at the PVE level. Any howtos or pitfalls? (Seeing as the Proxmox team actually has an MFSYS unit.)
 
I personally never configured this, but it should work (our IMS has only one SAS controller, so we cannot test it here).
 

Hi, did you ever get this to work? I have an MFSYS25 with Proxmox 1.9 and have yet to get multipath working.
 

I have an MFSYS35, which is basically the same as the MFSYS25 but with 3.5-inch disks instead of 2.5-inch, and I got multipath to work. It takes a little bit of work though:

- I installed Proxmox 2.0 (which is based on Debian Squeeze) without multipath, so just running off /dev/sda
- Install the multipath tools:
Code:
apt-get install multipath-tools multipath-tools-boot libfuse2 liblzma2
- After installation, download grub2 from Debian Wheezy, as the grub2 in Squeeze does not support booting from multipath devices. Use the following commands:

(Note that these links may stop working over time as new versions are released; grab the current latest versions.)

Code:
mkdir /root/grub
cd /root/grub
wget http://ftp.de.debian.org/debian/pool/main/e/eglibc/multiarch-support_2.13-35_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/x/xz-utils/liblzma5_5.1.1alpha+20120614-1_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/g/grub/grub_0.97-66_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/g/grub2/grub-pc_1.99-22.1_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/g/grub2/grub-common_1.99-22.1_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/g/grub2/grub2-common_1.99-22.1_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/g/grub2/grub-pc-bin_1.99-23.1_amd64.deb
dpkg -i *.deb

- Create /etc/multipath.conf:
Code:
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^(hd|xvd)[a-z][[0-9]*]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

devices {
        device {
                vendor                  "Intel"
                product                 "Multi-Flex"
                path_grouping_policy    "group_by_prio"
                getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
                prio                    "alua"
                path_checker            tur
                path_selector           "round-robin 0"
                hardware_handler        "1 alua"
                failback                immediate
                rr_weight               uniform
                rr_min_io               100
                no_path_retry           queue
                features                "1 queue_if_no_path"
        }
}


multipaths {
        multipath {
                wwid    CHANGE-ME
                alias   system
        }
}
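With the alias in place, the multipathed disk should show up as /dev/mapper/system, and the partition mappings get a -partN suffix (these are created by kpartx, which the multipath tools pull in). A quick way to check, with purely illustrative output:
Code:
ls /dev/mapper/
# system  system-part1  system-part5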

The alias name and the wwid should be changed to match your system. To get your unique WWID, use:
Code:
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
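The command prints a single WWID on one line, something like the following (the value below is just an illustration; yours will differ):
Code:
36001e4f00000000000a1b2c3d4e5f607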

- (Re)start multipath:
Code:
/etc/init.d/multipath-tools-boot restart; /etc/init.d/multipath-tools restart
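Before touching fstab, it is worth checking that multipath actually sees two paths per LUN (one through each SCM). The listing below is only a sketch of what healthy output looks like; device names and the WWID are illustrative:
Code:
multipath -ll
# system (36001e4f0...) dm-0 Intel,Multi-Flex
# size=200G features='1 queue_if_no_path' hwhandler='1 alua'
# |-+- policy='round-robin 0' prio=50 status=active
# | `- 0:0:0:0 sda 8:0  active ready running
# `-+- policy='round-robin 0' prio=10 status=enabled
#   `- 1:0:0:0 sdd 8:48 active ready running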
- Modify /etc/fstab to use the new multipath devices (part1 is the root filesystem here, part5 the swap partition):
Code:
/dev/mapper/system-part1        /       ext3    errors=remount-ro       0       0
/dev/mapper/system-part5        none    swap    sw                      0       0
- run "update-initramfs -u" to recreate initramfs
- Disable UUID in /etc/default/grub
- Run update-grub
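For those who have not edited /etc/default/grub before: disabling UUIDs comes down to setting one standard grub variable, roughly like this:
Code:
# in /etc/default/grub
GRUB_DISABLE_LINUX_UUID=true
update-grub should then write plain device paths instead of root=UUID=... entries.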

Now reboot, BUT press Escape during GRUB. Then edit the boot entry so that root=UUID=<whatever> becomes root=/dev/mapper/system-part1 (or whichever partition number holds your root filesystem).
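To illustrate, the edited kernel line at the GRUB prompt ends up looking roughly like this (the kernel version is just a placeholder; use whatever your menu shows):
Code:
linux /boot/vmlinuz-2.6.32-11-pve root=/dev/mapper/system-part1 ro quiet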

- After the system has booted, run update-grub again.
- Reboot to make sure everything is alright.

Be aware that the stock PVE kernel from Proxmox 2.1 has a bug in its mptsas driver that causes a kernel panic when you pull out an SCM (Storage Control Module). This has been fixed in the Debian backports 3.2 kernel:

- Add the following to /etc/apt/sources.list
Code:
deb http://backports.debian.org/debian-backports squeeze-backports main
- Run
Code:
apt-get -t squeeze-backports install linux-image-amd64
- Reboot
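After the reboot, you can confirm the backports kernel is actually the one running:
Code:
uname -r
# should print a 3.2.x version rather than 2.6.32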

You should be all done.
 
...
Be aware that the stock PVE kernel from Proxmox 2.1 has a bug in its mptsas driver that causes a kernel panic when you pull out an SCM. ...

Is this still true for 2.2? Can you re-test?
 
I tested it, and GRUB fails to boot. I am currently trying to find the cause of this.

I am unable to reproduce my GRUB problem. Perhaps I did something wrong in the process. I installed a new blade today with Proxmox and followed my own manual as described above. Apart from the GRUB URLs, which have changed due to newly released versions, the manual is valid.

I have updated the GRUB URLs.
 
Is this still true for 2.2? Can you re-test?

Sorry, I misread your question. I cannot re-test this, as my system is now running in production.

However, from what I can find on Google, this issue was introduced in kernel 2.6.27 and probably fixed in 2.6.32 or later. According to:

http://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_2.2

Proxmox VE 2.2 comes with kernel version 2.6.32, so this issue should be fixed in Proxmox VE 2.2. (Don't forget to reboot into the updated kernel after you run apt-get update && apt-get upgrade.)
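For anyone who wants to double-check after upgrading, both of these commands ship with a standard PVE install:
Code:
uname -r      # running kernel
pveversion    # installed PVE release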
 
Hi Tozz,

I am getting ready to try this on the MFSYS25. I have a few questions before I continue and hope you can answer them:

1. How do the packages from Wheezy behave during updates to the system?
2. How stable is your configuration, and how long has it been running in production?
3. I noticed some of the above packages are alpha... are there any stable packages at this date?
 
Hi Petrus,

1. The packages behave just like any other package, except that they have a higher version number. So should there be an update in the Squeeze repo, it will not replace them. However, it is very unlikely that these packages get an update: most Debian updates are security fixes, and Debian generally does not fix non-security bugs in a stable release.

2. It has been running for a few months now, no problems whatsoever. One of the blades has an uptime of 152 days.

3. I haven't looked at any other versions, as these just work. But Debian Wheezy is on its way to becoming the new stable, so I wouldn't worry too much about the package versions Wheezy is using. If they're in Wheezy now, there is a big chance the exact same versions will still be there when it becomes the new stable.
 
Thanks for your answers Tozz!!

I am updating the links to the packages and cannot find http://ftp.de.debian.org/debian/pool/main/g/grub/grub_0.97-66_amd64.deb or anything newer.

I do see this package:

Code:
http://ftp.de.debian.org/debian/pool/main/g/grub/grub-legacy_0.97-67_amd64.deb
Do you know if this package will also work, or is it the same? The only other similar package is:

Code:
http://ftp.de.debian.org/debian/pool/main/g/grub/grub_0.97-64_amd64.deb
which is an older version.


BTW: How did you determine which packages you needed to get the right GRUB installed? Are these just the grub packages and their dependencies? Also, why do you need the grub packages instead of only the grub2 packages?
 
Thanks for your answers Tozz!!
I am updating the links to the packages and cannot find http://ftp.de.debian.org/debian/pool/main/g/grub/grub_0.97-66_amd64.deb or anything newer.
I do see this package:
Code:
http://ftp.de.debian.org/debian/pool/main/g/grub/grub-legacy_0.97-67_amd64.deb
Do you know if this package will also work, or is it the same?

The package versions can change, as Debian Wheezy is not declared stable yet. So yes, you should use 0.97-67 instead of 0.97-66.

BTW: How did you determine which packages you needed to get the right GRUB installed? Are these just the grub packages and their dependencies? Also, why do you need the grub packages instead of only the grub2 packages?

I just grabbed the new GRUB (required for multipath boot) and looked at the dependencies it requires. I took the grub packages instead of grub2 because Squeeze uses grub. You can probably use grub2 if you prefer.
 
I have three disks assigned to a blade:

sda - root and boot partitions, some local storage for containers
sdb - vmdisks: raw LVM storage group for KVM virtual disks
sdc - vzdumps, templates, and ISO images, shared via NFS with the other PVE hosts in the cluster

I can see how to add sda and sdc to the above config since they are mounted at startup, but how do I deal with sdb? Is sdb taken care of by LVM, or do I also need to account for it in the multipath config?
 
The package versions can change, as Debian Wheezy is not declared stable yet. So yes, you should use 0.97-67 instead of 0.97-66.

Is there a difference between the "legacy" package and the non-legacy one, which is at 0.97-64? I am asking because when I install the packages I get the following error:

Errors were encountered while processing: grub-legacy



I just grabbed the new GRUB (required for multipath boot) and looked at the dependencies it requires. I took the grub packages instead of grub2 because Squeeze uses grub. You can probably use grub2 if you prefer.

OK thanks!
 
You need to configure multiple multipaths in the multipath config.

Determine the WWIDs by using:

Code:
/lib/udev/scsi_id --whitelisted --device=/dev/sda
/lib/udev/scsi_id --whitelisted --device=/dev/sdb
/lib/udev/scsi_id --whitelisted --device=/dev/sdc

And then add these IDs to /etc/multipath.conf:

Code:
multipaths {
        multipath {
                wwid    id-of-sda
                alias   system
        }
        multipath {
                wwid    id-of-sdb
                alias   vmdisks
        }
        multipath {
                wwid    id-of-sdc
                alias   vzdumps
        }
}
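After editing the config, the maps can be rebuilt without a reboot; something like this should do it (-r forces a devmap reload):
Code:
multipath -r
multipath -ll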
 

Thanks Tozz. After I posted, I figured as much; otherwise, how could the other storage controller see the vmdisks drive!

Did you get this error: "Errors were encountered while processing: grub-legacy"? See post #15.
 
OK, I think I fixed the grub-legacy error. I had to do a
Code:
dpkg -i --auto-deconfigure grub-legacy_0.97-67_amd64.deb
and re-install grub2-common and grub-pc. Now I don't have any errors.
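To double-check that the grub packages ended up in a consistent state, a quick listing helps:
Code:
dpkg -l | grep grub
# all grub packages should show state 'ii' (installed), not 'iF' or 'iU'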
 
- Modify /etc/fstab to use the new multipath devices:
Code:
/dev/mapper/system-part1        /       ext3    errors=remount-ro       0       0
/dev/mapper/system-part5        none    swap    sw                      0       0


OK, so my /etc/fstab currently looks like this:

Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=294b7d72-4dea-4549-97c9-4320a55f3fb5 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID=777b58c2-c780-41c3-919f-3640e4f5cce6 /srv ext3 defaults 0 2

I am a bit confused about what I should do with the boot mount entries etc. What should my /etc/fstab look like?

Here is my df -h

Code:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/pve-root   51G  2.1G   47G   5% /
tmpfs                  24G     0   24G   0% /lib/init/rw
udev                   24G  284K   24G   1% /dev
tmpfs                  24G   44M   24G   1% /dev/shm
/dev/mapper/pve-data  112G  823M  111G   1% /var/lib/vz
/dev/sda1             495M   94M  377M  20% /boot
/dev/sdc1             549G  467G   55G  90% /srv
/dev/fuse              30M   32K   30M   1% /etc/pve


And here is my partial fdisk -l:

Code:
Disk /dev/sda: 222.2 GB, 222189395968 bytes
255 heads, 63 sectors/track, 27012 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00096417


   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          66      523264   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              66       27013   216457216   8e  Linux LVM


Disk /dev/sdb: 1198.0 GB, 1197995228160 bytes
255 heads, 63 sectors/track, 145647 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8aa2e50f


   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1      145647  1169909496   83  Linux


Disk /dev/sdc: 598.0 GB, 597999034368 bytes
255 heads, 63 sectors/track, 72702 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf6f9fc2c


   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1       72702   583978783+  83  Linux
 

OK, I figured it out. The new fstab now looks like this:


Code:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1

#UUID=294b7d72-4dea-4549-97c9-4320a55f3fb5 /boot ext3 defaults 0 1
/dev/mapper/system-part1 /boot ext3 defaults 0 1

/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0

#UUID=777b58c2-c780-41c3-919f-3640e4f5cce6 /srv ext3 defaults 0 2
/dev/mapper/vzdumps-part1 /srv ext3 defaults 0 2
I left the old entries commented out in case I wanted to reverse my work.
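For anyone following along: the new entries can be sanity-checked before rebooting, since mount -a mounts everything in fstab that is not already mounted:
Code:
mount -a
mount | grep /dev/mapper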


Multipath now works!!! Thanks Tozz for all of your help!
 
