Hi,
I have 4 Proxmox servers, all connected via multipath to a Fibre Channel storage array, which serves two volumes: STORAGE-DATA and STORAGE-DUMP.
Every node is showing 4 sd* devices (2 per volume) and one mapper device per volume:
Code:
root@node2:~# multipath -ll
STORAGE-DATA (36b4432610018cd305a2f574700000013) dm-5 HUAWEI,XSG1
size=8.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 1:0:1:1 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 8:0:0:1 sdd 8:48 active ghost running
STORAGE-DUMP (36b4432610018cd305a2f64b900000014) dm-6 HUAWEI,XSG1
size=4.0T features='0' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 8:0:0:2 sde 8:64 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 1:0:1:2 sdc 8:32 active ghost running
root@node2:~# mount|grep STORAGE
/dev/mapper/STORAGE-DATA-part1 on /STORAGE-DATA type ocfs2 (rw,relatime,_netdev,heartbeat=local,nointr,data=ordered,errors=remount-ro,atime_quantum=60,coherency=full,user_xattr,acl,_netdev)
/dev/mapper/STORAGE-DUMP-part1 on /STORAGE-DUMP type ocfs2 (rw,relatime,_netdev,heartbeat=local,nointr,data=ordered,errors=remount-ro,atime_quantum=60,coherency=full,user_xattr,acl,_netdev)
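For completeness: I believe the friendly names come from a multipath.conf roughly like this (I'm reconstructing it, only the WWIDs are certain since they appear in the output above):
Code:
multipaths {
        multipath {
                wwid 36b4432610018cd305a2f574700000013
                alias STORAGE-DATA
        }
        multipath {
                wwid 36b4432610018cd305a2f64b900000014
                alias STORAGE-DUMP
        }
}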
Currently the volumes are formatted with OCFS2 (Oracle Cluster File System): since every node sees the same storage, a cluster filesystem is needed to coordinate concurrent access over the network and avoid conflicts.
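From what I understand, the nodes coordinate through an O2CB cluster defined in /etc/ocfs2/cluster.conf; I assume ours looks something like this (cluster name, node names and IPs below are placeholders, and only two of the four nodes are shown):
Code:
cluster:
        node_count = 4
        name = pvecluster

node:
        ip_port = 7777
        ip_address = 192.168.1.11
        number = 0
        name = node1
        cluster = pvecluster

node:
        ip_port = 7777
        ip_address = 192.168.1.12
        number = 1
        name = node2
        cluster = pvecluster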
The problem is that I don't know OCFS2 (this configuration was made by another IT team), and the volumes are mounted on Proxmox as directory storage, so I have no snapshots, no migration, no high availability, nothing.
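For reference, in /etc/pve/storage.cfg the volumes are defined as plain directory storage, more or less like this (the content types are from memory):
Code:
dir: STORAGE-DATA
        path /STORAGE-DATA
        content images,rootdir
        shared 1
        is_mountpoint 1

dir: STORAGE-DUMP
        path /STORAGE-DUMP
        content backup,iso
        shared 1
        is_mountpoint 1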
I'm wondering whether I could switch to Ceph to get a block device that supports snapshots, or whether you have any other ideas on how to improve this setup.
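If Ceph is the way to go, I imagine I'd end up with an RBD entry in storage.cfg instead, something like this (pool and storage names are invented by me):
Code:
rbd: ceph-vm
        pool vm-disks
        content images,rootdir
        krbd 0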
Thank you very much for your help!
Bye