Help Setting up Shared Block Device using Fibre-Channel

riptide_wave

Hello,
After spending hours researching this, I finally give up. I have a NAS that connects directly to 4 Proxmox nodes over Fibre Channel and presents the same block device to all 4 nodes. Each Proxmox node sees the block device as /dev/sdc, and what I am trying to do is get "clustered" journaling working so I can use the storage for High Availability migrations.

I have tried OCFS2 as the filesystem, but the PVE kernel has it disabled (a quick way to check is shown below).
I tried Ceph, but it does not seem to do what I need (unless I am missing something?).
GlusterFS is the same story as Ceph: it seems to be able to "replicate", but doing that on the same block device would cause nothing but journal issues.
The only thing I was able to get to work at all was GFS2, but I was not very impressed with its speed.

So, any hints/tips/tricks on getting a shared block device to work across 4 nodes for High Availability?
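(For anyone who wants to verify the OCFS2 situation on their own kernel, a quick check is something like this:)
Code:
# Look for OCFS2 in the running kernel's build config
grep -i ocfs2 /boot/config-$(uname -r)
# and check whether the module is available at all
modinfo ocfs2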

Also, because it seems to always be asked, here is my pveversion info:
Code:
root@C6100-1-N1:/boot# pveversion -v
proxmox-ve-2.6.32: 3.2-126 (running kernel: 3.10.0-2-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-3.10.0-2-pve: 3.10.0-10
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

Thanks
 
Using LVM on that device should do the job.
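On your FC LUN that would look roughly like this (run the first two commands once, from a single node; "san" and "san-lvm" are just example names):
Code:
# initialize the shared FC LUN as an LVM physical volume
pvcreate /dev/sdc
# create a volume group on it - all nodes see the same VG metadata
vgcreate san /dev/sdc
# register it cluster-wide as shared LVM storage for VM images
pvesm add lvm san-lvm --vgname san --content images --shared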

dietmar, I have been researching LVM, but I don't think it will accomplish what I am looking for. I want to share the same partition on a block device across all nodes, not create separate LVM partitions for each. Unless there is something here I am missing?

Thanks.
 
Hi,
if you want to use the disk with a filesystem, like an NFS share, you need a cluster filesystem, as you wrote before. But why do you want this?
It's perhaps helpful for OpenVZ containers, but AFAIK this is not supported yet - only local storage or NFS.

Normally this kind of storage is used for LVM - a volume group which all nodes can access. On the volume group there are LVs (the VM disks), which are only opened from the node where the VM is running.
With this config you can use live migration for VMs (or restart the VMs on another node if one fails), and this is the normal use case for external storage.
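The resulting entry in /etc/pve/storage.cfg looks roughly like this (storage and VG names are only examples):
Code:
lvm: san-lvm
        vgname san
        content images
        shared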

Udo
 

Hello Udo,

The reason I am looking for this is that I want to move from my Gigabit NFS setup to a 4Gb Fibre Channel setup for better throughput and higher reliability (since I am doing a direct SAN, or DAS, setup). The reason I want a shared block device this way is so I don't have to re-create my VMs as LVs, if that makes sense. If I had a shared block device, it could just host my VMs the same way my NFS shares do. The more I look into it, though, the more I think I may be better off sticking with NFS, at least until a better solution comes out for DAS setups.
 
Hi,
you don't need to recreate your VMs - simply use (live) storage migration. It's quite easy and works well.

Just try to move a VM to your LVM SAN - I guess you will love the speed (depending on the IO of your SAN). And it's much easier than anything involving a cluster FS...
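E.g. something like this (VM ID, disk and storage names are examples):
Code:
# move the disk of VM 100 to the shared LVM storage while the VM runs,
# deleting the old image on the NFS source afterwards
qm move_disk 100 virtio0 san-lvm --delete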

It just won't work for CT/backup/ISO storage...

Udo
 
Hmm, thanks for pointing me in the right direction! I will mess with this and see if I can get it to work how I want.

As for my SAN, sadly it's just a Debian box with a quad-port QLogic card running SCST to pass block devices through to the Fibre Channel cards.
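(For anyone curious, a minimal scst.conf for that kind of passthrough would look something like this - the device name and WWN here are made up, so treat it as a sketch:)
Code:
# export a local block device over FC via the QLogic target driver
HANDLER vdisk_blockio {
        DEVICE shared0 {
                filename /dev/sdb
        }
}

TARGET_DRIVER qla2x00t {
        TARGET 50:01:43:80:00:00:00:01 {
                enabled 1
                LUN 0 shared0
        }
}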

Also, will there ever be support for backups and ISOs? Or maybe that's the one case where I should use GFS2?

Thanks again!
 
