One of my customers has an Intel MFSYS35 running PVE 1.9 and has just installed an extra SAS controller. We'd like to enable multipath drive array access at the PVE level. Any howtos or pitfalls? (seeing as the Proxmox team actually has an MFSYS unit)
Hi, did you ever get this to work? I have the MFSYS25 with Proxmox 1.9 and have yet to get multipath working.
apt-get install multipath-tools multipath-tools-boot libfuse2 liblzma2
mkdir /root/grub
cd /root/grub
wget http://ftp.de.debian.org/debian/pool/main/e/eglibc/multiarch-support_2.13-35_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/x/xz-utils/liblzma5_5.1.1alpha+20120614-1_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/g/grub/grub_0.97-66_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/g/grub2/grub-pc_1.99-22.1_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/g/grub2/grub-common_1.99-22.1_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/g/grub2/grub2-common_1.99-22.1_amd64.deb
wget http://ftp.de.debian.org/debian/pool/main/g/grub2/grub-pc-bin_1.99-23.1_amd64.deb
dpkg -i *.deb
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^(hd|xvd)[a-z][[0-9]*]"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
devices {
    device {
        vendor "Intel"
        product "Multi-Flex"
        path_grouping_policy "group_by_prio"
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        prio "alua"
        path_checker tur
        path_selector "round-robin 0"
        hardware_handler "1 alua"
        failback immediate
        rr_weight uniform
        rr_min_io 100
        no_path_retry queue
        features "1 queue_if_no_path"
    }
}
multipaths {
    multipath {
        wwid CHANGE-ME
        alias system
    }
}
# /lib/udev/scsi_id --whitelisted --device=/dev/sda
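The command above prints the volume's WWID, which replaces the CHANGE-ME placeholder in /etc/multipath.conf. A minimal sketch of that substitution, run here against a throwaway demo file with a made-up WWID so it can be dry-run safely (on the real host, take the WWID from scsi_id and edit /etc/multipath.conf itself):

```shell
# Demo: substitute a WWID for the CHANGE-ME placeholder with sed.
# The file path and WWID below are fabricated for this dry run; on the
# real system the WWID comes from scsi_id and the target file is
# /etc/multipath.conf.
cat > /tmp/multipath-demo.conf <<'EOF'
multipaths {
    multipath {
        wwid CHANGE-ME
        alias system
    }
}
EOF
WWID="3600508b400105e210000900000490000"
sed -i "s/CHANGE-ME/${WWID}/" /tmp/multipath-demo.conf
grep "wwid" /tmp/multipath-demo.conf
```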
/etc/init.d/multipath-tools-boot restart; /etc/init.d/multipath-tools restart
/dev/mapper/system-part1 ext3 errors=remount-ro 0 0
/dev/mapper/system-part5 swap sw 0 0
deb http://backports.debian.org/debian-backports squeeze-backports main
apt-get -t squeeze-backports install linux-image-amd64
...
Be aware that the stock PVE kernel from Proxmox 2.1 has a bug in its mptsas driver that causes a kernel panic when you pull out an SCM. ...
Is this still true for 2.2? Can you re-test?
I tested it, and grub fails to boot. I am currently trying to find the cause of this.
Thanks for your answers Tozz!!
I am updating the links to the packages and cannot find http://ftp.de.debian.org/debian/pool/main/g/grub/grub_0.97-66_amd64.deb or better.
I do see this package:
http://ftp.de.debian.org/debian/pool/main/g/grub/grub_0.97-64_amd64.deb
Do you know if this package will also work, or is it the same? The only other similar package is:
http://ftp.de.debian.org/debian/pool/main/g/grub/grub-legacy_0.97-67_amd64.deb
BTW: how did you determine which packages you needed to get the right grub installed? Are these just the grub packages and their dependencies? Also, why do you need the grub packages instead of only the grub2 packages?
The package versions can change, as Debian Wheezy has not been declared stable yet. So yes, you should use 0.97-67 instead of 0.97-66.
Errors were encountered while processing: grub-legacy
I just grabbed the new grub (required for multipath boot) and looked at the dependencies it requires. I took the grub packages instead of grub2 because Squeeze uses grub. You can probably use grub2 instead if you prefer it.
You need to configure multiple multipath entries in the multipath config.
Determine the WWIDs by using:
/lib/udev/scsi_id --whitelisted --device=/dev/sda
/lib/udev/scsi_id --whitelisted --device=/dev/sdb
/lib/udev/scsi_id --whitelisted --device=/dev/sdc
And then add these IDs to /etc/multipath.conf:
multipaths {
    multipath {
        wwid id-of-sda
        alias system
    }
    multipath {
        wwid id-of-sdb
        alias vmdisks
    }
    multipath {
        wwid id-of-sdc
        alias vzdumps
    }
}
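With several disks, the stanzas can also be generated instead of hand-copied. A sketch, assuming the device/alias pairs from the example above; gen_multipaths is a hypothetical helper, not part of multipath-tools, and the placeholder WWIDs stand in for real scsi_id output:

```shell
# Sketch: emit a multipaths{} section from a "device alias" table on stdin.
# On a real host the wwid line would call
#   /lib/udev/scsi_id --whitelisted --device=$dev
# instead of fabricating a placeholder from the device name.
gen_multipaths() {
    printf 'multipaths {\n'
    while read -r dev alias; do
        wwid="id-of-$(basename "$dev")"   # placeholder; use scsi_id for real
        printf '    multipath {\n        wwid %s\n        alias %s\n    }\n' "$wwid" "$alias"
    done
    printf '}\n'
}
printf '/dev/sda system\n/dev/sdb vmdisks\n/dev/sdc vzdumps\n' | gen_multipaths
```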
I used dpkg -i --auto-deconfigure grub-legacy_0.97-67_amd64.deb and re-installed grub2-common and grub-pc; now I don't have any errors.
- Modify /etc/fstab to use the new multipath devices:
/dev/mapper/system-part1 ext3 errors=remount-ro 0 0
/dev/mapper/system-part5 swap sw 0 0
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/pve-root 51G 2.1G 47G 5% /
tmpfs 24G 0 24G 0% /lib/init/rw
udev 24G 284K 24G 1% /dev
tmpfs 24G 44M 24G 1% /dev/shm
/dev/mapper/pve-data 112G 823M 111G 1% /var/lib/vz
/dev/sda1 495M 94M 377M 20% /boot
/dev/sdc1 549G 467G 55G 90% /srv
/dev/fuse 30M 32K 30M 1% /etc/pve
Disk /dev/sda: 222.2 GB, 222189395968 bytes
255 heads, 63 sectors/track, 27012 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00096417
Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 523264 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 27013 216457216 8e Linux LVM
Disk /dev/sdb: 1198.0 GB, 1197995228160 bytes
255 heads, 63 sectors/track, 145647 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8aa2e50f
Device Boot Start End Blocks Id System
/dev/sdb1 1 145647 1169909496 83 Linux
Disk /dev/sdc: 598.0 GB, 597999034368 bytes
255 heads, 63 sectors/track, 72702 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf6f9fc2c
Device Boot Start End Blocks Id System
/dev/sdc1 1 72702 583978783+ 83 Linux
OK, so my /etc/fstab currently looks like this:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
UUID=294b7d72-4dea-4549-97c9-4320a55f3fb5 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
UUID=777b58c2-c780-41c3-919f-3640e4f5cce6 /srv ext3 defaults 0 2
I am a bit confused about what I should do with the boot mount entries etc.; what should my /etc/fstab look like?
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve/root / ext3 errors=remount-ro 0 1
/dev/pve/data /var/lib/vz ext3 defaults 0 1
#UUID=294b7d72-4dea-4549-97c9-4320a55f3fb5 /boot ext3 defaults 0 1
/dev/mapper/system-part1 /boot ext3 defaults 0 1
/dev/pve/swap none swap sw 0 0
proc /proc proc defaults 0 0
#UUID=777b58c2-c780-41c3-919f-3640e4f5cce6 /srv ext3 defaults 0 2
/dev/mapper/vzdumps-part1 /srv ext3 defaults 0 2
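As a quick sanity check on an fstab edited this way, you can flag entries that still reference raw /dev/sdX devices and would therefore bypass multipath. A sketch, demonstrated on a throwaway sample file rather than the real /etc/fstab (point the awk command at /etc/fstab on the actual host):

```shell
# Sketch: list fstab entries whose device field is a raw /dev/sdX node;
# after the multipath switch, shared-storage mounts should all use
# /dev/mapper/* names. The sample file below is fabricated for the demo.
cat > /tmp/fstab-demo <<'EOF'
/dev/mapper/system-part1 /boot ext3 defaults 0 1
/dev/mapper/vzdumps-part1 /srv ext3 defaults 0 2
/dev/sdc1 /srv ext3 defaults 0 2
EOF
awk '$1 ~ /^\/dev\/sd/ { print "still raw: " $1 " -> " $2 }' /tmp/fstab-demo
```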