Distributed filesystem for HA cluster and SAS storage

Hello Alex

I attached the boot output from the screen, and it looks similar to the output you posted before.
So it does start booting Proxmox, but then it suddenly stops (see attachment).

What could I do next?

Thank you very much in advance and
BR

Tonci
 

Attachments: proxmox-sas.JPG (151 KB)
I'm not sure I understood correctly, but do you have SAS multipathing? If so, have you installed the multipath-tools package?
Have you tried booting the server with only one SAS connection?

Maybe you can try to upgrade the kernel:

Add this to your sources.list:
deb http://download.proxmox.com/debian wheezy pve-no-subscription

and install the latest kernel: pve-kernel-3.10.0-5-pve
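
As a rough sketch (assuming the repository line above has already been added to /etc/apt/sources.list):

apt-get update
apt-get install pve-kernel-3.10.0-5-pve
# then reboot into the new kernel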


Alex
 
Hello Alex
thank you, I got one step forward.
I put aside the connection to the second SAN controller and connected both hosts only to the first one -> no storage redundancy for now.
On the V3700 I created a volume (SAS, 1 TB), defined both hosts, and mapped the SAS links as well.
As a result I see this device as /dev/sdb1 on both hosts:

Disk /dev/sdb: 1099.5 GB, 1099511627776 bytes
173 heads, 2 sectors/track, 6206600 cylinders, total 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002e1fd


Device Boot Start End Blocks Id System
/dev/sdb1 2048 2147483647 1073740800 8e Linux LVM


but when I try to create a new VG in order to configure shared LVM, I get this:

root@pve001:~# vgcreate pve2 /dev/sdb1
No physical volume label read from /dev/sdb1
Can't open /dev/sdb1 exclusively. Mounted filesystem?
Unable to add physical volume '/dev/sdb1' to volume group 'pve2'.


I got stuck here and cannot move on ...

How can I proceed to get shared LVM storage on my cluster?

tnx in advance and

BR

Tonci
 
Hi Alex
the situation is kinda developing ... :-)

So after this error I figured out that the problem was in the SAS partition reference:

root@pve001:~# vgcreate pve2 /dev/sdb1
No physical volume label read from /dev/sdb1
Can't open /dev/sdb1 exclusively. Mounted filesystem?

Unable to add physical volume '/dev/sdb1' to volume group 'pve2'.

Instead of /dev/sdb1, it must be:

/dev/mapper/36005076300808672d800000000000000-part1


because blkid says:

root@pve001:~# blkid
/dev/sda2: UUID="612f0ae5-4cf2-4fa8-96b2-633d7c339700" TYPE="ext3"
/dev/sda3: UUID="VMQScD-hDDp-bJgF-DYeK-hPoA-mn28-RyGQAm" TYPE="LVM2_member"
/dev/mapper/pve-root: UUID="b90f763f-eea6-4738-835c-ea25c198d86b" TYPE="ext3"
/dev/mapper/pve-swap: UUID="d624e6ac-f94b-4549-85c4-1e31ce9b465b" TYPE="swap"
/dev/mapper/pve-data: UUID="c55982ce-e43c-417c-8932-f95db60dfd56" TYPE="ext3"
/dev/mapper/3600605b00805b0301bec2bbf4363c211-part2: UUID="612f0ae5-4cf2-4fa8-96b2-633d7c339700" TYPE="ext3"
/dev/mapper/3600605b00805b0301bec2bbf4363c211-part3: UUID="VMQScD-hDDp-bJgF-DYeK-hPoA-mn28-RyGQAm" TYPE="LVM2_member"
/dev/sdb1: UUID="x3N7Bm-FnWT-ia5D-T9om-UD1t-3ROt-AZWgSr" TYPE="LVM2_member"
/dev/mapper/36005076300808672d800000000000000-part1: UUID="x3N7Bm-FnWT-ia5D-T9om-UD1t-3ROt-AZWgSr" TYPE="LVM2_member"
root@pve001:~#

Since running it against that /dev/mapper device succeeded, the shared LVM storage now works as expected.
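
For reference, a rough sketch of the sequence that works, using the device name from the blkid output above (lsblk simply confirms that the device-mapper multipath map is stacked on top of sdb, which explains the earlier "can't open exclusively" error):

lsblk /dev/sdb
pvcreate /dev/mapper/36005076300808672d800000000000000-part1
vgcreate pve2 /dev/mapper/36005076300808672d800000000000000-part1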

So the next step would be to connect the second canister (controller) to the hosts, which means a multipath config should be created?

Tomorrow I'll be on-site to connect the rest of the cables :-)
and will report back with the results.

till then

best regards

Tonci
 
Hello Tonci,

I think you have to get your multipath working first ... because after that, you will have to create the filesystem on a virtual disk created by multipath.
1) Install the multipath package;
2) Plug in the two SAS cables;
3) Run this command: multipath -ll
Normally you will see the different paths for accessing your Storwize. Then configure /etc/multipath.conf correctly ... more or less like this (the wwid is the ID shown by multipath -ll):
multipaths {
    multipath {
        wwid 36005076300808616d000000000000000
        alias vdisk
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        failback immediate
        #rr_weight priorities
        rr_weight uniform
        no_path_retry 5
        rr_min_io 100
    }
}

4) Restart the multipath service
5) Create a physical volume on the vdisk: pvcreate /dev/mapper/vdisk
6) Create a volume group: vgcreate VG-DATAVM /dev/mapper/vdisk
7) Go to the Proxmox admin panel, to the Storage tab of the cluster, and create a shared LVM storage on this volume group.

Maybe it's necessary to reboot the nodes to get the volume group available on each node...
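
As a rough command-line sketch of these steps (assuming Debian Wheezy / Proxmox VE 3.x package and service names):

apt-get install multipath-tools              # step 1
multipath -ll                                # step 3: note the wwid of the Storwize vdisk
# edit /etc/multipath.conf as shown above, then:
service multipath-tools restart              # step 4
pvcreate /dev/mapper/vdisk                   # step 5
vgcreate VG-DATAVM /dev/mapper/vdisk         # step 6
# step 7: add the volume group as a shared LVM storage in the web GUI (Storage tab of the cluster view)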

Alex
 
Hi Alex

thanks ... so you think I will have to destroy the current volume and create another one through multipath?

BR
Tonci
 
Tonci,

I think yes, you do, because in my view the volume group has to be created on a physical volume that is itself created on the vdisk through multipath.
Maybe there is another way.

Alex
 
Hello Alex

everything works out of the box :-)
After connecting the two other cables and mapping them correctly, a restart did the whole job. The LVM volume remained the same and nothing changed.

root@pve01:~# lvscan
ACTIVE '/dev/pve/swap' [31.00 GiB] inherit
ACTIVE '/dev/pve/root' [69.50 GiB] inherit
ACTIVE '/dev/pve/data' [161.46 GiB] inherit
ACTIVE '/dev/pve2/vm-100-disk-1' [72.00 GiB] inherit
ACTIVE '/dev/pve2/vm-100-disk-2' [450.00 GiB] inherit
root@pve01:~#

root@pve02:~# lvscan
ACTIVE '/dev/pve/swap' [31.00 GiB] inherit
ACTIVE '/dev/pve/root' [69.50 GiB] inherit
ACTIVE '/dev/pve/data' [161.46 GiB] inherit
inactive '/dev/pve2/vm-100-disk-1' [72.00 GiB] inherit
inactive '/dev/pve2/vm-100-disk-2' [450.00 GiB] inherit
root@pve02:~#

root@pve02:~# multipath -ll
36005076300808672d800000000000001 dm-1 IBM,2145
size=1.2T features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 7:0:1:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
`- 7:0:0:0 sdb 8:16 active ready running
3600605b00805c7501bee87290d6b4896 dm-0 IBM,ServeRAID M1115
size=278G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 6:2:0:0 sda 8:0 active ready running
root@pve02:~#

root@pve01:~# multipath -ll
36005076300808672d800000000000001 dm-1 IBM,2145
size=1.2T features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 2:0:1:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
`- 2:0:0:0 sdb 8:16 active ready running
3600605b00805b0301bec2bbf4363c211 dm-0 IBM,ServeRAID M1115
size=278G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
`- 0:2:0:0 sda 8:0 active ready running
root@pve01:~#


There was no need for a multipath.conf because Debian's multipath-tools already has a built-in (hardcoded) multipath definition for exactly this storage :-)

After that I reinstalled both hosts, installed multipath-tools, left the LVM volume on the storage as it was (with those VM disks), mapped the SAS adapters to the storage, and after one reboot I defined the LVM storage in the Proxmox cluster; the two VM disks on the storage automatically appeared in the disk list ... I only had to adjust the 100.conf VM file to change the disk references, run one qm rescan, and that was it!!!
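
For illustration only (the storage name pve2 and the sizes come from the lvscan output above; the virtio bus is an assumption), the adjusted disk lines in /etc/pve/qemu-server/100.conf look roughly like:

virtio0: pve2:vm-100-disk-1,size=72G
virtio1: pve2:vm-100-disk-2,size=450G

followed by a plain qm rescan so the volumes get picked up.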

The only extra thing was installing multipath-tools, without any configuration or adjustment at all.

So a real out-of-the-box concept!!!

Thank you for your help and support

BR

Tonci