better option for storage

rickygm

Renowned Member
Sep 16, 2015
Hi forum, I'm writing to ask for advice on the best option for high availability. I have two servers connected to a SAN via FC, and soon I will have a third server.

Now I need to share this storage between the cluster nodes. From what I have read, the closest thing to a solution seems to be OCFS2, although I see it does not work with the Proxmox kernel. I have also looked at DRBD, but it is not what I want; the idea is that all nodes share the storage without data corruption.

I plan to run about 15 VMs in this cluster.
My SAN is an HP 6300 with 8 TB.

What do you advise?
 
Hi Rickygm,

I haven't seen an OCFS2 setup in years, so maybe I'm wrong, but I think it is dead. Maybe a Proxmox developer can shed light on this topic.

If you already have shared storage via an FC-based SAN, you can use LVM. This works very well and is well documented on the wiki:

https://pve.proxmox.com/wiki/Storage_Model
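
Purely as a sketch (the storage and volume group names below are made up, not taken from the wiki): the entry that ends up in /etc/pve/storage.cfg for a volume group sitting on the FC LUN could look roughly like this, with the shared flag telling the cluster that every node sees the same device:

lvm: san_lvm
        vgname san_vg
        content images
        shared 1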

You only need DRBD if you do not have shared storage (no SAN), because DRBD creates a shared storage device over the network.

Best,
LnxBil
 
I think OCFS2 is still a good option. I have worked with LVM and Proxmox, but only with local disks. My concern is what happens when a node goes down.
 
I think OCFS2 is still a good option, ...

There is no up-to-date documentation on this, and no mention of it on the wiki since 1.9. I don't think it is supported anymore, is it?


My concern is what happens when a node goes down.

Why is this a problem? I only encountered problems when the main network was connected in a crossover fashion, but that is normally never an option for Oracle anyway; you always have to use a switch. The MII status should never be down.
 
Thanks, LnxBil, for your help!

Yes, I think OCFS2 is not supported anymore.

I think I don't understand: if I use an LVM volume and mount it, it has to be associated with one server, say node 1; if node 1 fails, all the VMs go down.

How would you do it?
 
Present a LUN from your shared storage to each host, then put LVM on top. Add the LVM group to each physical host in the cluster. If node 1 fails, all of its VMs will start on the remaining nodes. You don't need anything else if you already have shared/central storage.
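
A quick way to sanity-check that setup (just a sketch; the device name is only an example) is to confirm on every node that the FC LUN and the volume group on it are visible:

lsblk     # the SAN LUN should show up as a block device, e.g. /dev/sdb
pvs       # the physical volume created on that LUN should be listed
vgs       # the shared volume group should appear identically on each node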
 
I could never get GFS2 to work without crashes, but that was back in Proxmox 3.4. It is a real cluster filesystem with concurrent access to all files (at the file level) that relies on shared storage, but it is also NOT supported in Proxmox; only GlusterFS is, and that is not a shared-storage-based cluster filesystem.

Unfortunately, LVM was the only solution for my FC-based SAN (block level, in contrast to GFS2). There is no automatic snapshotting from the GUI (also 3.4, maybe that has changed), but it works like any cluster solution in the Linux world (e.g. Xen). LVM is rock solid; it has been for at least 6 years for us, and we still count on it.
 
Do you actually need a cluster-aware filesystem, or are you simply looking to have an HA cluster where the VMs can run on any node in the cluster?
 
What I want is that when I shut down a node, the VMs automatically start on another node.

This can be handled very handily with LVM. You CAN use GlusterFS with 2 nodes; it doesn't matter that you only have 1 storage device, as it isn't limited to 1 LUN... Unless you want (and expect) to end up with an active/passive cluster, you would want a minimum of two LUNs so each of your nodes can be active on one of them at any given time. With 2 LUNs you can use Gluster, or just two LVM mountpoints. Given your particular use case, the latter is probably the better approach.
 
Here is what I do, and I believe it's what most others do. I don't see any reason to present 2 LUNs.

I present only 1 LUN from the storage to each host in the cluster. I also use multipath, but I will leave that out of the picture.

I then put LVM on top of the LUN.
- pvcreate /dev/sdx
- vgcreate nameofvg /dev/sdx

Once that is done, you can hit the GUI of one of the cluster nodes and add the LVM storage. Create a VM, add it to HA, and if you have all your ducks in a row, everything should work.
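
For reference, a hedged sketch of the same flow done purely from the CLI; the storage name and the VMID are examples only, and the pvesm/ha-manager options may differ slightly between versions:

pvcreate /dev/sdx                                                      # initialize the LUN as an LVM physical volume
vgcreate nameofvg /dev/sdx                                             # create the volume group on top of it
pvesm add lvm san_lvm --vgname nameofvg --shared 1 --content images    # register it as shared LVM storage for the cluster
ha-manager add vm:100                                                  # after creating VM 100 on that storage, put it under HA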

It would be best to start with these.
https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster
https://pve.proxmox.com/wiki/High_Availability_Cluster_4.x
 
I also presented more LUNs and created various LVM volume groups (e.g. SLOW and FAST), yet the simplest solution is to have one LUN, as adamb suggested, and extend the volume group if more space is needed.
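
If you go that route, growing the existing volume group later is just a matter of presenting another LUN and extending the group; a minimal sketch (the device name is an example):

pvcreate /dev/sdy            # initialize the newly presented LUN
vgextend nameofvg /dev/sdy   # add it to the existing volume group
vgs nameofvg                 # free space in the group should now be larger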
 
Hi, following your advice, I did the following: I created 4 logical volumes and mounted them at:

/var/lib/vz/lv_1_lvm_DFAST1
/var/lib/vz/lv_2_lvm_DFAST2
/var/lib/vz/lv_1_lvm_DLOW1
/var/lib/vz/lv_2_lvm_DLOW2


PV /dev/mapper/DLOW2 VG lvm_DLOW2 lvm2 [4.29 TiB / 93.20 GiB free]
PV /dev/mapper/DLOW1 VG lvm_DLOW1 lvm2 [4.00 TiB / 102.39 GiB free]

PV /dev/mapper/DFAST2 VG lvm_DFAST2 lvm2 [730.00 GiB / 17.11 GiB free]
PV /dev/mapper/DFAST1 VG lvm_DFAST1 lvm2 [730.00 GiB / 17.11 GiB free]

I edited fstab and added entries to mount the volumes automatically.

I have a question: to add these volumes as LVM storage in the Proxmox web interface, do I have to do it on both nodes, or just on one with the 'shared' option enabled?
 
I present only 1 LUN from the storage to each host in the cluster. I also use multipath, but I will leave that out of the picture.

I just want to point out that if you have multiple pipes (which I take as a given; what's the point of multipathing without them?), any one LUN can only use one path at a given time, leaving the other(s) idle. For an active/active configuration you want a minimum of 1 LUN per host channel on your storage.
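
On the multipath side, a minimal /etc/multipath.conf sketch that gives the LUNs friendly names like the /dev/mapper devices shown earlier in the thread; the WWIDs below are placeholders and must be replaced with the values your array actually reports (e.g. via multipath -ll):

defaults {
        user_friendly_names yes
}

multipaths {
        multipath {
                wwid   360014380000000000000000000000001   # placeholder WWID
                alias  DFAST1
        }
        multipath {
                wwid   360014380000000000000000000000002   # placeholder WWID
                alias  DLOW1
        }
}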
 
