Create a cluster with only two computers... and the storage?

lince

Hello there,

At the moment I have a single computer with Proxmox. Now I would like to create my first cluster, but I only have two computers available.

I believe I need shared storage in order to create a cluster, so how can I create shared storage between the two nodes if I don't have a third computer or a SAN/NAS? Is that possible?


Thanks.
 
Thanks for your reply,

My current server runs 3.4, but I was actually thinking about upgrading to v4 to create the cluster, so if possible I would prefer an alternative that I can build with v4.

Do you think DRBD is not supported in v4?

I read something about Ceph; could it work with v4?

Thanks :)
 
I think you're mixing some stuff up here.

1. High Availability
In Proxmox 3.4 you could do a 2-node HA cluster with shared storage as a quorum disk. As far as I know, that option (quorum disk) is no longer usable in Proxmox 4; afaik you need a minimum of 3 nodes, else no HA.

2. Normal Cluster (no HA)
You can still do a non-HA cluster with 1, 2 or X nodes. The difference is afaik only that you do not have HA.



You can do shared storage in both cases, but for HA it is mandatory.
Check https://pve.proxmox.com/wiki/Storage_Model for the possible types.

I personally have experience with Ceph, and with NFS (openmediavault in a VM) on top of Ceph.
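
If it helps, creating a plain (non-HA) cluster on pve4 looks roughly like this (cluster name and IP are placeholders for your setup):

Code:
# on the first node (your desktop):
pvecm create mycluster

# on the second node, pointing at the first node's IP:
pvecm add 192.168.1.10

# check membership and quorum from either node:
pvecm status

One thing to watch with 2 nodes: each node has 1 vote, so when one is off the survivor loses quorum and /etc/pve goes read-only. You can temporarily tell it to expect a single vote with "pvecm expected 1".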
 
Thanks Q-wulf,

I have been reading about the different storage models and now my ideas are a little clearer :)

What I want to build is a cluster, but mainly for learning purposes, because I don't really have a use for it. Right now I have only 16 virtual machines created and I usually don't have more than 4-5 switched on, so the node I have at the moment is enough.

I want to build the cluster with the desktop computer (where I have Proxmox at the moment) and a spare laptop. My requirements for the cluster are as follows:

- I want to be able to remove the laptop from the cluster and keep my VMs working (in case some day I need the laptop for something)
- I want to be able to perform live migration
- I don't really need HA, although it would be nice to activate it to see how it works; after that I can turn it off

After reading the links you provided and clearing my mind a bit, I think the best option for me is to create the storage on the desktop and use the laptop as a client. This way I can remove the laptop whenever I need to and keep working with the desktop.

Correct me if I'm wrong, but for this I believe the best option is to use NFS or GlusterFS. I read that Ceph and DRBD are used with clustering, so in order to set them up I would need to replicate the storage on both nodes, and if I want to remove the laptop every now and then, it's probably not a good idea to use them.

What do you think is better, NFS or GlusterFS? I think I may choose NFS because it is simpler and I can't see any advantage in using GlusterFS.

Will I be able to enable HA with this setup?

Thanks :)
 
Hello Q-wulf,

I was playing a bit with NFS today and indeed it is easy to use. It's so simple that I think in the end I'm going to create three partitions: I will use one with NFS to get the system running fast and easy, and I'm going to use the other two to test GlusterFS and Ceph (it's true that it can be installed on one node).
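
In case it's useful for someone else, this is roughly what I did for the NFS part (the export path and addresses are just my examples):

Code:
# on the desktop (NFS server), export a directory to the lab subnet:
echo '/srv/nfs-vms 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -ra

# then add it as shared storage to the cluster:
pvesm add nfs nfs-vms --server 192.168.1.10 --export /srv/nfs-vms --content images,iso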

That way I will first learn how to set up the cluster and everything else, and then I'll have the option to play a bit with the different storage models. I'll keep in mind the thing about the votes when I create the cluster.

Next step: make some space on the hdd and create the three partitions without losing any data.

Thanks a lot for your help ;)
 
Yep, when I was writing it I was actually wondering if that would be possible.

I was thinking about installing NFS, GlusterFS and Ceph on the same disk, in three different partitions.

I guess NFS and Gluster would support that, but I'm not so sure about Ceph. I would need another hdd for Ceph, right?
 
Ceph strongly discourages the use of partitions for OSDs. They advise using a whole disk (which Ceph then partitions into journal and OSD space). Journals can be offloaded to other devices (such as SSDs, M.2 disks, FusionIO cards, etc).
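
On Proxmox that whole-disk layout is what pveceph sets up for you anyway; a sketch, with placeholder device names (careful, this wipes the disks):

Code:
# create an OSD on a whole disk, with the journal offloaded to a separate (SSD) device:
pveceph createosd /dev/sdb --journal_dev /dev/sdc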

The whole reason to use Ceph is to have no single point of failure, plus horizontal and vertical scalability. If you e.g. create a pool over your OSDs with replication 3/1 (3 copies of the data, min_size = 1 copy), you will need a minimum of 3 OSDs.
Now if you want to e.g. use erasure-coded pools (which in my POV is the single best reason to use Ceph), then you need k+m OSDs, where k is the number of chunks the data will be split into and m is the number of parity chunks (as in the number of drives you can lose without data loss). Let's assume you want m=3 and 3 data chunks, so k=3; that gives you an overhead of 100% (m/k) to allow 3 drives to fail. Nothing special here. Now let's assume you want to withstand 5 drive failures and split your data into 30 chunks (m=5, k=30), and you end up with an overhead of only about 17%.
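
As a sketch, such an EC pool is created something like this (profile name, pool name and PG count are arbitrary; ruleset-failure-domain=osd allows chunks to land on OSDs of the same host, which you'd only ever do for testing):

Code:
# profile with k=3 data chunks and m=3 parity chunks (100% overhead):
ceph osd erasure-code-profile set ec-3-3 k=3 m=3 ruleset-failure-domain=osd

# pool using that profile (128 placement groups is just an example):
ceph osd pool create ecpool 128 128 erasure ec-3-3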

Why am I mentioning all this? Because you need at least 2 HDDs to "do" anything with Ceph. If you want to do something advanced (like e.g. an EC pool), you're looking at a minimum of 2 HDDs (k=1, m=1), but then you basically get the same as a replicated pool (2/1). It really starts getting useful with more drives (the more the better), plus some SSDs for cache tiers, custom CRUSH maps and failure domains, and custom hook scripts (for deriving a node's location in the cluster from its hostname) and all that goodness, which will only really work when you have a minimum of 3 nodes.

My suggestion: in your case (learning Ceph) you'd probably be better off testing Ceph in a VM (or a couple of VMs) with multiple virtual disks on virtio. Nested Proxmox? And while you're at it, get some openvswitch knowledge under your belt :P
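
Such a test VM could look something like this (VM ID, name, storage and disk sizes are arbitrary; each virtio disk can later back one OSD inside the nested Proxmox):

Code:
qm create 9001 --name ceph-lab --memory 4096 --net0 virtio,bridge=vmbr0 \
  --virtio0 local:32 --virtio1 local:32 --virtio2 local:32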
 
Fair enough, I will follow your advice and build Ceph in a virtual environment. For now I'll concentrate on the cluster, and I'll leave some space for GlusterFS. I couldn't find much info about the speed difference between NFS and Gluster, so I would really like to test it.

I gotta say, I like your suggestion about openvswitch. Do you know of any good documentation with a lab or something similar to try different scenarios? My lab is quite simple and doesn't require any advanced network configuration.

Thanks :)
 
Fair enough, I will follow your advice and build Ceph in a virtual environment. For now I'll concentrate on the cluster, and I'll leave some space for GlusterFS. I couldn't find much info about the speed difference between NFS and Gluster, so I would really like to test it.[...]
Have a read here as an entry point:
https://nileshgr.com/2014/07/18/failed-experiment-glusterfs
The reason you would use GlusterFS over NFS is that you get "easy" failover, plus a speed advantage when using multiple servers (reading in parallel).
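
If you get around to the gluster test, the basic flow is something like this (volume name, brick paths and hostnames are placeholders):

Code:
# on the gluster side: a volume replicated across two bricks
gluster volume create vmstore replica 2 desktop:/bricks/vmstore laptop:/bricks/vmstore
gluster volume start vmstore

# then hook it into proxmox:
pvesm add glusterfs gfs-vms --server desktop --volume vmstore --content images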



I gotta say, I like your suggestion about openvswitch. Do you know of any good documentation with a lab or something similar to try different scenarios? My lab is quite simple and doesn't require any advanced network configuration.[...]


https://pve.proxmox.com/wiki/Open_vSwitch
That's a pretty good starting point.
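
To give you an idea, a minimal OVS setup in /etc/network/interfaces looks roughly like this (interface name and address are placeholders, and you need the openvswitch-switch package installed):

Code:
allow-vmbr0 eth0
iface eth0 inet manual
    ovs_type OVSPort
    ovs_bridge vmbr0

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    ovs_type OVSBridge
    ovs_ports eth0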
 
