OCFS2 is a shared file system. So basically every node sees the same storage in the same way, and the nodes can share the same files without conflicting with each other.
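As a rough sketch of what "shared" means here (device path, label, and mount point are placeholders, not taken from the thread):

```shell
# Run ONCE, from any single node: format the shared LUN with enough
# node slots for the whole cluster (-N 4, one slot per Proxmox node).
mkfs.ocfs2 -L shared-data -N 4 /dev/mapper/STORAGE-DATA

# Run on EVERY node: mount the very same device. The O2CB cluster
# stack coordinates access, so concurrent writers do not corrupt
# each other's files.
mkdir -p /mnt/shared-data
mount -t ocfs2 /dev/mapper/STORAGE-DATA /mnt/shared-data
```

This assumes the O2CB cluster configuration is already in place on all nodes.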
Yes, I know how Ceph works, but I thought it would work on multipath block devices.
It would work if I had local...
Ok, but now, using OCFS2 as directory storage, I can have snapshots if I use qcow2 images.
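For example (a hypothetical sketch: the image path below is a placeholder, assuming the OCFS2 mount is used as a Proxmox directory storage), qcow2 images support internal snapshots regardless of the underlying storage:

```shell
# Create an internal snapshot named "before-upgrade" inside the qcow2 image.
qemu-img snapshot -c before-upgrade /mnt/shared-data/images/100/vm-100-disk-0.qcow2

# List the snapshots stored in the image.
qemu-img snapshot -l /mnt/shared-data/images/100/vm-100-disk-0.qcow2
```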
But I cannot have HA.
So I have to choose between snapshots and HA, right?
I cannot use CIFS or NFS because that FC storage is directly attached to the Proxmox nodes.
Ok Ceph is not a solution, I...
Yes, /dev/mapper/STORAGE-DATA is the block device from multipath, so I would be able to create an LVM physical volume on it.
Just one question: should I create an LVM or an LVM-Thin volume?
Because LVM will not give me snapshots, but LVM-Thin will.
So I don't need OCFS2, fine!
So could I create the LVM physical volume, instead of the OCFS2 volume, directly on the /dev/mapper/STORAGE-DATA device?
So basically pvcreate /dev/mapper/STORAGE-DATA and so on?
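Roughly, yes. A minimal sketch of the "and so on" part, assuming an LVM-thin pool is the goal (the volume-group name `vg_data`, pool name `tpool`, and storage ID `fc-thin` are made-up placeholders):

```shell
# Mark the multipath LUN as an LVM physical volume.
pvcreate /dev/mapper/STORAGE-DATA

# Create a volume group on top of it.
vgcreate vg_data /dev/mapper/STORAGE-DATA

# Carve all free space into a thin pool; thin volumes created in this
# pool support snapshots, unlike plain LVM logical volumes.
lvcreate -l 100%FREE --thinpool tpool vg_data

# Register it as an LVM-thin storage in Proxmox.
pvesm add lvmthin fc-thin --vgname vg_data --thinpool tpool
```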
What about GFS2? Do you consider it better than OCFS2?
I have 4 Proxmox servers, all connected to a Fibre Channel storage using multipath, which serves two volumes: STORAGE-DATA and STORAGE-DUMP.
Every node shows 4 sd* devices (2 per volume) and one mapper device per volume:
root@node2:~# multipath -ll
I cannot work out how to solve this problem: I'm now creating a new virtual machine on another server to get around the disk problem, and I don't want to replicate the issue there.
Could you give me some hints on what I have to do to avoid this on the new server?
I've just created a...
My virtual machine has two ZFS drives, one 60 GB and one 18 TB:
root@gus3:~# grep zfs /etc/pve/qemu-server/3001.conf
I have 4 × 10 TB disks on this server and I'm running out of disk space, so I'm...