Create a 3-node cluster with exact copies of a VM on all 3

amachils

New Member
Mar 11, 2016
I created a 3-node cluster with Proxmox, but I can't seem to create a VM that is available on all three, so that if one node fails, another copy is started on one of the other nodes. Is this even possible? I created a ZFS storage, chose it to be available on all three nodes, and it's active on all of them. I created a new VM with its disk on this storage, so I expected it to appear on all three, but nothing. What am I missing? I've read several how-to's and watched several clips on YT, but most of these are for older versions of Proxmox. I'm running the latest version, 4.1.

Any help/hints/tips are appreciated.

Angelo
 
You need shared storage (e.g. Ceph, iSCSI, …) for the VM disks for this, not a local one.
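As a rough sketch, shared storage is attached cluster-wide with the `pvesm` tool; the storage IDs, server addresses, and export paths below are placeholders, not values from this thread:

```shell
# Sketch only: hypothetical storage IDs, addresses and paths -- adjust for your setup.
# Attach an NFS export as storage visible to all cluster nodes:
pvesm add nfs shared-nfs --server 192.168.1.50 --export /tank/vmdata --content images

# Or attach an iSCSI target (typically with LVM layered on top of the LUN):
pvesm add iscsi shared-iscsi --portal 192.168.1.51 --target iqn.2016-03.local.storage:vmdata

# Check that the storage shows up as active on every node:
pvesm status
```

A ZFS pool defined on each node, by contrast, is still three independent local datasets even if the storage entry is marked as available on all nodes, which is why the VM disk did not appear anywhere else.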
 
Thank you for your quick reply. But then this storage server would be a single point of failure, or would I need two? How would I sync those two, or can Proxmox sync them? Any up-to-date how-to's for this?
 
Ceph, for example, is a distributed shared storage system; check the official Ceph documentation, the Proxmox wiki, and numerous posts in this forum for details on how to set it up and when to use or not use it. It probably offers what you need, but it also requires powerful enough hardware to deliver good performance. You can run a Ceph cluster on its own (on different physical machines than your PVE nodes) or on your PVE cluster itself; both variants have their own pros and cons.
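For the PVE-managed variant, the setup is driven by the `pveceph` helper; a minimal sketch (network and device names are placeholders, and the exact subcommands should be checked against the wiki for your PVE version) might look like:

```shell
# Sketch only: assumes Ceph runs on the PVE nodes themselves.
# Placeholder network (10.10.10.0/24) and disk (/dev/sdb) -- adjust to your hardware.
pveceph install                       # install the Ceph packages on this node
pveceph init --network 10.10.10.0/24  # write the initial config with a dedicated Ceph network
pveceph createmon                     # create a monitor (repeat on each node for quorum)
pveceph createosd /dev/sdb            # turn a spare disk into an OSD (per node)
pveceph createpool vm-disks           # create a pool to back an RBD storage for VM images
```

The resulting pool is then added as RBD storage in the GUI or with `pvesm`, after which VM disks placed on it are reachable from every node.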
 
Can I run Ceph on the servers themselves, or do I need separate storage servers? Also, I see that 10Gb networking is mandatory with Ceph?! If that is the case, that's not going to happen: one of the nodes is on the other side of the country, and the most I can get is a dedicated 1Gb backbone.

I have Gluster running on the Proxmox servers themselves, but would DRBD be a better choice?

[edit] Scrap Gluster, I guess: when I try to install an OS on a KVM VM which has its disk on a GlusterFS volume, I'm getting a lot of errors related to the vda device....
 
DRBD, like Ceph, is a distributed, shared storage, so the data is written synchronously to all participating storage nodes. This is essential for a (consistent) failover system.

What you want is a rather complicated setup. Is there a VPN or a dedicated, protected line between the servers? Such "long distance" communication (I assume you do not live in a small country, where "the other side" means only a few km) is very, very bad for performance, because of the synchronous writes on all nodes. This cannot be done over long distances without a significant penalty.
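To illustrate why the latency hurts: the synchronous behaviour described above is what DRBD calls protocol C, where a write only completes once every peer has it on disk. A minimal resource definition (hostnames, devices, and addresses below are placeholders) might look like:

```
# /etc/drbd.d/r0.res -- sketch with placeholder hosts, disks and addresses
resource r0 {
    protocol C;               # fully synchronous: write returns only when all peers have it
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;
    on node1 { address 10.0.0.1:7789; }
    on node2 { address 10.0.0.2:7789; }
}
```

With protocol C, every VM write stalls for at least one network round trip to the remote site, so a 200 km link puts a hard floor under your write latency regardless of bandwidth.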
 

Well, my country is not soooo big (the Netherlands); the distance between the location of future nodes 1 and 2 and the location of future node 3 is about 200 km.
Yes, there is a dedicated line between the two locations.
 