Hello,
After spending hours researching this, I finally give up. I have a NAS that connects directly to 4 Proxmox nodes over Fibre Channel and presents the same block device to all 4 of them. Each Proxmox node sees the block device as /dev/sdc, but what I am trying to do is get "clustered" journaling working so I can use the storage for High Availability migrations.
I have tried OCFS2 as the filesystem, but the PVE kernel has it disabled.
I tried Ceph, but it does not seem to do what I need (unless I am missing something?)
GlusterFS is the same story as Ceph: it can "replicate", but doing that on top of the same block device would cause nothing but journal issues.
The only thing I was able to get to work at all was GFS2, but I was not very impressed with the speeds.
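For reference, a minimal GFS2 setup on a shared LUN looks roughly like this (a sketch, not my exact commands; "mycluster", the filesystem name, the mount point, and the journal count are placeholders that have to match a 4-node cman cluster config):

```shell
# Sketch only: create a GFS2 filesystem with one journal per node (4 here).
# "mycluster" must match the cluster name in /etc/cluster/cluster.conf,
# and lock_dlm is the clustered locking protocol GFS2 needs for multi-node use.
mkfs.gfs2 -p lock_dlm -t mycluster:shared0 -j 4 /dev/sdc

# Mount with noatime/nodiratime to cut down on journal and lock traffic,
# which is one of the usual first steps when GFS2 feels slow.
mount -t gfs2 -o noatime,nodiratime /dev/sdc /mnt/shared
```

That got me a working shared filesystem, just not fast.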
So, any hints/tips/tricks on getting a shared block device to work across 4 nodes for High Availability?
Also, because it seems to always be asked, here is my pveversion info:
Code:
root@C6100-1-N1:/boot# pveversion -v
proxmox-ve-2.6.32: 3.2-126 (running kernel: 3.10.0-2-pve)
pve-manager: 3.2-4 (running version: 3.2-4/e24a91c1)
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-28-pve: 2.6.32-124
pve-kernel-3.10.0-2-pve: 3.10.0-10
pve-kernel-2.6.32-29-pve: 2.6.32-126
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-16
pve-firmware: 1.1-3
libpve-common-perl: 3.0-18
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve5
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-8
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1
Thanks