Recommendations for a 3 server setup including storage

bjornoss

New Member
Oct 8, 2015
I need some recommendations regarding a new setup.

I have 3 servers/PCs:
         Server 1               Server 2             Server 3
CPU      i7 2600K               Xeon E5310           i5 750
RAM      16 GB                  8 GB                 16 GB
DISK     4 x 2 TB + 128 GB SSD  2 x 2 TB + 1 x 1 TB  8 x 2 TB + 256 GB SSD
NETWORK  2 x 1 Gbps             2 x 1 Gbps           1 x 1 Gbps

Before Proxmox version 4 was released, I planned to set up a two-host HA cluster using servers 1 and 2, with server 3 as a shared storage server running OpenMediaVault.
But in version 4, an HA cluster requires at least 3 servers plus shared storage.

I want the highest availability possible, as one of the VMs will be running pfSense.

What about setting up a cluster with servers 1 and 2 but not enabling HA?
If server 1 crashes, is it possible to log on to server 2 and just restart the VMs manually?

Any other suggestions?
I am open to do small hardware changes if it makes the setup better or simpler.

What file systems should I use on the servers and the shared storage?
I wanted to try ZFS on Proxmox 4 but I have some problems getting it to work (forum post).

Thanks
 
What about setting up a cluster with servers 1 and 2 but not enabling HA?
If server 1 crashes, is it possible to log on to server 2 and just restart the VMs manually?

That works perfectly (you just need to set the expected votes to 1 after the crash). The requirement for 3
nodes applies only if you want to use HA.
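For reference, lowering the quorum expectation on the surviving node is done with the `pvecm` tool (a sketch assuming Proxmox VE 4.x; run it only once the other node is really down):

```shell
# On the surviving node, after the other one has crashed:
# tell the cluster that a single vote is enough for quorum,
# so /etc/pve becomes writable again.
pvecm expected 1

# Check cluster and quorum state afterwards.
pvecm status
```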
 
Thanks for the quick reply.
Then I'll go for that solution.

How about the file system?
ZFS RAID1 or ZFS RAID10 on Proxmox?

Reading the Storage Model page in the wiki, is it best to use "ZFS over iSCSI" or "LVM Groups with Network Backing" in my case?

Thanks again.
 
For this hardware I would recommend a different setup (I have this in my lab).

Build a cluster from all three servers, with or without HA. With three nodes you are able to
set up a Ceph cluster, which gives you storage redundancy.
You may set the replication factor to two (instead of the default of three),
and you may balance the load (perhaps not using the i7-950 for VMs).

Consider using an SSD as the OSD journal; it works well to use one SSD for both Proxmox and the journal.
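As a sketch, creating an OSD with its journal on a separate SSD might look like this (device names are examples; the `pveceph` option syntax is from the Proxmox VE 4.x era, so check `pveceph help createosd` on your version):

```shell
# /dev/sdb = data HDD, /dev/sdc = SSD holding the journal (example devices)
pveceph createosd /dev/sdb --journal_dev /dev/sdc

# Verify the OSD came up and joined the cluster.
ceph osd tree
```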

I have no experience (yet) with 4.0 HA in this environment; I will upgrade my two clusters
(both with three servers) to a six-node HA cluster in the next weeks.

Cheers,
Birger
 
Thanks for your input as well Birger.
I will read up on Ceph and see if that could be an alternative.
/Bjørn Håkon
 
Really? Has this changed in 4.0? I'm very interested in it, since as far as I recall, with a Proxmox 3.4 cluster (and NO HA), if a node crashed, in addition to setting the expected votes to 1 you still had to manually move the config files of the VMs that were running on the dead node. For example, on the surviving node 'prox01' you would issue something like
Code:
   nodedown='prox02' ; mv /etc/pve/nodes/$nodedown/qemu-server/*.conf /etc/pve/nodes/$(hostname)/qemu-server/
I'm really looking for a GUI-only solution for the situation where a human has control of the physical state of the servers (can be sure node 2 is dead and pull its power cord) and acts in the web GUI to restart the VMs on the surviving node, while waiting for hardware assistance to arrive.
Thanks a lot.
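Short of a GUI button, the two manual steps could be wrapped in one script, roughly like this (a hypothetical helper; the script name, node names, and logging are my own additions):

```shell
#!/bin/sh
# recover-node.sh: pull the VMs of a dead node onto this node.
# WARNING: only run this when you are CERTAIN the dead node is powered off.
# Usage: ./recover-node.sh prox02
set -eu

DEAD_NODE="$1"
ALIVE_NODE="$(hostname)"

# Allow this single node to keep quorum so /etc/pve becomes writable.
pvecm expected 1

# Move the VM configs from the dead node to this one.
mv /etc/pve/nodes/"$DEAD_NODE"/qemu-server/*.conf \
   /etc/pve/nodes/"$ALIVE_NODE"/qemu-server/

# Record the action in syslog, for auditing later.
logger -t recover-node "moved VM configs from $DEAD_NODE to $ALIVE_NODE"
```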
 
if a node crashes, in addition to setting the expected votes to 1 you still had to manually move the configs of the VMs that were running on the dead node

Yes, that is still required. But IMHO it is not a big deal.
 
For the average Joe, at least in my country, it is. One idea would be to extend the GUI with some per-node customization, like buttons in a "custom" tab whose press can be tied to a confirmation message and a shell script. That way I could set the message "This will start the VMs of node2 on node1; you must be sure node2 is powered off. Proceed?" and then have the script set the expected votes and move the config files. :) Of course, this activity should be logged somewhere, so that if disaster strikes we have proof it was because node2 was not powered off and this functionality was used nevertheless.
Thanks a lot
 
I need some recommendations regarding a new setup.

I have 3 servers/PCs:
         Server 1               Server 2             Server 3
CPU      i7 2600K               Xeon E5310           i5 750
RAM      16 GB                  8 GB                 16 GB
DISK     4 x 2 TB + 128 GB SSD  2 x 2 TB + 1 x 1 TB  8 x 2 TB + 256 GB SSD
NETWORK  2 x 1 Gbps             2 x 1 Gbps           1 x 1 Gbps

Before Proxmox version 4 was released, I planned to set up a two-host HA cluster using servers 1 and 2, with server 3 as a shared storage server running OpenMediaVault.
But in version 4, an HA cluster requires at least 3 servers plus shared storage.

I want the highest availability possible, as one of the VMs will be running pfSense.
[...]


I would do a Ceph cluster and build a custom CRUSH map, so CRUSH can take care of failure domains for you.
If you later need more IOPS and install more SSDs, you can modify the CRUSH map to split the "shared storage" (pools) into separate HDD and SSD pools, an HDD pool with an SSD cache tier, or a jerasure-coded pool (which basically tolerates more failed OSDs while using far less space than normal HDD replication), with a standard HDD cache and an SSD cache on top of that. Or, even better, a combination of all of them.

In short, it allows you a lot of flexibility in finding the best compromise between speed, IOPS, redundancy, and space usage.
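As an illustration, separating SSD-backed OSDs into their own CRUSH root can be done with `ceph osd crush` commands along these lines (bucket, rule, and pool names are made up; OSD IDs and PG counts are examples):

```shell
# Create a separate CRUSH root for SSD-backed OSDs.
ceph osd crush add-bucket ssd-root root

# Add a per-node SSD host bucket under it (example names).
ceph osd crush add-bucket node1-ssd host
ceph osd crush move node1-ssd root=ssd-root

# Place an SSD OSD (here osd.8, weight 1.0) into that bucket.
ceph osd crush set osd.8 1.0 host=node1-ssd

# Create a replicated rule restricted to the SSD root,
# then a pool that uses it.
ceph osd crush rule create-simple ssd-rule ssd-root host
ceph osd pool create ssd-pool 128 128 replicated ssd-rule
```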



I replicated this on a 4-node cluster in VirtualBox using a single SSD (8 OSDs) and a single HDD (20 OSDs), then added 2 more physical machines (2 SSDs + 5 HDDs each) and removed 2 virtual ones. It works well and reliably.


Some advice, though:

  • Think about sticking another dual-port gigabit NIC (2 x 1 Gbps) into your server 3. (A rule of thumb, I figure: maximum read/write speed of a disk x number of disks + your expected VM traffic.) I personally found that the 30 Euro for an additional 2 x 1 Gbps was well worth it, compared to the massive cost jump to 10 Gbps.

  • Use Open vSwitch to bond your physical NICs together (2 x 1 Gbps, 2 x 1 Gbps, 3 x 1 Gbps), then create VLANs on top of that for Proxmox, Ceph, and public traffic - see the wiki.

  • I used 32 GB USB 3 sticks for the OS (10 € apiece), so I don't have to waste an HDD/SSD.
  • I set up two FreeNAS VMs: one using a jerasure-coded pool, added to the Proxmox cluster so I can back up my VMs to it; the other for client stuff, like Time Machine, etc.
  • I then also added a pfSense VM via KVM and attached a WLAN card to it via PCIe passthrough (3 virtual NICs total).
  • I have 32 GB RAM in each physical host, but never seem to use more than 2-3 GB on the Proxmox+Ceph nodes.
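The Open vSwitch bond plus VLANs mentioned above might look roughly like this in `/etc/network/interfaces` (interface names, the VLAN tag, and addresses are example values; see the Proxmox Open vSwitch wiki page for the canonical form):

```
# Example fragment for Proxmox with openvswitch-switch installed.
allow-vmbr0 bond0
iface bond0 inet manual
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_bonds eth0 eth1
    ovs_options bond_mode=balance-slb

auto vmbr0
iface vmbr0 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 vlan10

# Management/Proxmox traffic on VLAN 10 (example tag and address)
allow-vmbr0 vlan10
iface vlan10 inet static
    ovs_type OVSIntPort
    ovs_bridge vmbr0
    ovs_options tag=10
    address 192.168.10.11
    netmask 255.255.255.0
```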


Side note:

  • I'm getting myself a couple more SSDs so I can have a faster, higher-capacity SSD-only pool / cache pool (not using dedicated journals and wasting an SSD on HDD journals). I'll probably go for a bunch of 60 GB (30 €) ones, as I don't have the funds to buy the same number of disks in 256 GB or 480 GB variants, although those would give me a better Euro-per-GB value.
  • If you later run into network congestion, just stick another dual-port gigabit card into the offending machine and add it to your Open vSwitch bond.
  • I'm still toying with adding a GPU to one of the physical nodes and attaching it to a Windows KVM machine via PCIe passthrough, to use as a client (eliminating the need for one client I almost never use).
 
