DRBD

Alessandro 123

Well-Known Member
May 22, 2016
Anyone using DRBD in production?
As per my other thread, I'm planning a new environment with HA and live migration.

Gluster and Ceph are too complicated and too slow in small clusters; we would need at least 8 nodes to get decent performance.

DRBD is way faster and requires only 2 nodes.
I'm thinking about creating a DRBD Active/Passive cluster with volumes exported through NFS.

Should I use NFS or iSCSI?
Any suggestions for getting as close as possible to 100% availability with no data loss? Split-brain scenarios scare me.
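For context, split-brain behavior in DRBD 8.x is governed by the `after-sb-*` net options together with a fencing handler. A minimal illustrative resource config (the resource name is an assumption; the handler scripts are the stock ones shipped with drbd-utils for CRM-managed clusters):

```
resource r0 {
  net {
    # Refuse automatic recovery unless it is provably safe:
    # sync from the node with changes, or disconnect and wait for an operator.
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
    fencing resource-and-stonith;
  }
  handlers {
    # Stock helper scripts that place/remove a Pacemaker constraint
    # so the outdated peer cannot be promoted.
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```

The conservative `disconnect` policies mean a real split brain requires manual resolution, which is usually preferable to silently discarding writes.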
 
Keep in mind that I was talking about DRBD on 2 nodes.
PVE would be on 4, 5, or 6 nodes (currently I have 8 XenServer nodes that I would like to move to PVE).
 
> I'm thinking about creating a DRBD Active/Passive cluster with volumes exported through NFS.

In that case PVE only sees the NFS server; what you do on the NAS side is transparent to the hypervisor.
 
I know, but I'm just looking for some real-world usage information.

Maybe some users are running DRBD in production and are willing to share their experience.
 
NFS on top of DRBD with a floating IP is not a good idea, since the NFS service runs in kernel mode: file locks are acquired in kernel mode and exported to the client. Any operation that relies on a persistent connection to the NFS server (VMs, for example) will hang on fail-over, since locks cannot be transferred between hosts. In your case I would opt for exposing the storage through iSCSI (backed by DRBD) and then implementing the floating IP with Pacemaker or whatever fits your bill. https://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing
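To make the iSCSI-on-DRBD idea concrete, here is a hedged sketch of the Pacemaker side using pcs. The resource name (r0), IQN, IP address, and device path are illustrative assumptions, and the exact pcs syntax varies between versions (older releases use `pcs resource master` instead of `promotable`):

```shell
# Promotable (master/slave) DRBD resource managed by Pacemaker
pcs resource create drbd_r0 ocf:linbit:drbd drbd_resource=r0 \
    promotable meta master-max=1 clone-max=2 notify=true

# Floating service IP that clients use to reach the iSCSI target
pcs resource create storage_ip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.50 cidr_netmask=24

# iSCSI target and LUN exported from the DRBD device
pcs resource create iscsi_tgt ocf:heartbeat:iSCSITarget \
    iqn=iqn.2016-05.local.storage:r0
pcs resource create iscsi_lun ocf:heartbeat:iSCSILogicalUnit \
    target_iqn=iqn.2016-05.local.storage:r0 lun=1 path=/dev/drbd0

# Group them and pin the group to wherever DRBD is primary
pcs resource group add iscsi_group iscsi_tgt iscsi_lun storage_ip
pcs constraint colocation add iscsi_group with drbd_r0-master \
    INFINITY with-rsc-role=Master
pcs constraint order promote drbd_r0-master then start iscsi_group
```

On fail-over, Pacemaker demotes/promotes DRBD, moves the target, LUN, and IP together, and the PVE nodes reconnect to the same IQN at the same address.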
 
NFS was never designed with HA in mind, so it is not advisable to use it for HA. Commercial storage vendors (NetApp, Nexenta, EMC, etc.) do offer this feature, but only because they have built their own HA layer on top of NFS. That is therefore something you pay for.