Perfect setup for active/active homelab

pepperonime

Hi,

I'm currently setting up a Proxmox lab at home and I'd like some guidance and recommendations.

Here is my hardware:
2 HP ProLiant MicroServer Gen8 servers, each with:
- 16 GB of RAM
- a 220 GB SSD
- 2 gigabit interfaces
- a microSD slot for the Proxmox installation

I would like to have an active/active cluster and I'm trying to find a solution. For the moment I've thought of a few different setups (a rough DRBD config for the first one is sketched after this list):
- one 220 GB partition shared with DRBD Primary/Primary, with LVM on top
- two 110 GB partitions shared with DRBD Primary/Secondary, with LVM on top
- one partition shared with DRBD Primary/Secondary per container/VM hosted in Proxmox (less flexible, but each instance is independent)
- one 220 GB partition on each host with Proxmox replication (simple setup, but a bigger RPO in case of a disk crash)
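
For the Primary/Primary option I picture a DRBD resource roughly like this; the hostnames, backing partition and replication addresses are only placeholders:

Code:
resource r0 {
    net {
        protocol C;               # synchronous replication
        allow-two-primaries;      # required for active/active
    }
    on nodeA {
        device    /dev/drbd0;
        disk      /dev/sda4;      # placeholder backing partition
        address   10.0.0.1:7788;  # placeholder replication-link address
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/sda4;
        address   10.0.0.2:7788;
        meta-disk internal;
    }
}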


I also have a question on the network side.
Currently I'm using one gigabit interface for network access and one for DRBD replication (with a cable plugged directly between the two interfaces and jumbo frames enabled).
I wonder whether mode 6 bonding (balance-alb) would give the same (or slightly lower) performance, but with the benefit of link resiliency?
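
Roughly the /etc/network/interfaces I have in mind for that; interface names and addresses are placeholders:

Code:
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode balance-alb

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.21
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

As far as I understand, balance-alb doesn't need any special switch support, unlike LACP.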

Many thanks for your help.
 

DRBD is not supported by Proxmox any more. Use ZFS with replication.
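
In practice that means a local ZFS pool on each node added as storage, plus a replication job per guest; a rough sketch, where the storage, pool and node names and the VM ID are examples only:

Code:
# add an existing ZFS pool as Proxmox storage
pvesm add zfspool local-zfs --pool tank --content images,rootdir

# replicate guest 100 to the other node every 15 minutes
pvesr create-local-job 100-0 nodeB --schedule "*/15"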


It's not quite clear how many NICs you have in total per server. It's recommended to keep application and cluster traffic separated. If you have 3 in total, use 1 for the application and bond the other two for cluster communication.
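
If you do end up with a dedicated cluster link, you can bind corosync to it when creating the cluster. A sketch with made-up addresses and cluster name; note the option is --link0 on PVE 6+, while older releases used --ring0_addr:

Code:
# on the first node, using its address on the cluster network
pvecm create homelab --link0 10.10.10.1

# on the second node, joining via the first node's cluster address
pvecm add 10.10.10.1 --link0 10.10.10.2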
 
Hi Richard, thanks for your answer.

DRBD is not supported by Proxmox any more. Use ZFS with replication.
I guess that's related to the (not so) recent changes in DRBD policies around their tools (and DRBD9)?

OK, so one zpool on each SSD with scheduled replication? What a pity to lose synchronous replication :(.
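
Something like this on each node then, with the pool name and device as placeholders:

Code:
# pool on the SSD partition set aside for guests
zpool create -o ashift=12 tank /dev/sda4
zfs set compression=lz4 tank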


It's not quite clear how many NICs you have in total per server. It's recommended to keep application and cluster traffic separated. If you have 3 in total, use 1 for the application and bond the other two for cluster communication.

I have 2 NICs on each server. I've configured them in bonding mode 6.


Thanks again for your help, I appreciate it.
 
I have a 2-node cluster (3 if you count the VM I run to break the tie in case one node goes bad) as a lab, and a load of VMs. It's essentially PROD in my case, as I have Plex and a few containers for a wiki and Ansible.

I use the "cluster" address on the same range as my backend app traffic. At the time as I run my pfSense box on the same host, if I buggered up my host then I wouldn't be able to route from my client VLAN to my cluster VLAN. If theres a better way then I'd love to hear a solution.

If I were you, I'd LACP (or whatever you want) your 2 NICs together and run your MGMT VLAN as your cluster network. I also have a VM backend VLAN (trunked to the switch for the other host in the cluster) and a client VLAN.
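
Roughly what that looks like in my /etc/network/interfaces, with interface names, VLAN IDs and addresses changed, so treat it as a sketch:

Code:
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# MGMT/cluster address on VLAN 10 (example VLAN ID)
auto vmbr0.10
iface vmbr0.10 inet static
    address 10.0.10.2
    netmask 255.255.255.0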

I am, however, using a mix of LVM/LVM-thin and a little bit of NFS/qcow2. I'm actually running a VM that hosts the NFS service so that I have an easy migration path if I decide to move my storage to a dedicated NFS box.
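
The export from that VM is just added as regular Proxmox storage, something like this (storage ID, server address and export path are examples):

Code:
pvesm add nfs vm-nfs --server 192.168.10.50 --export /srv/vmstore --content images,iso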
 

Hi,

As I don't have an 802.3ad-capable switch I can't configure LACP.
That's why I'm falling back on bonding mode 6, which seems not too bad, but it needs different IP addresses to be able to send ARP replies with different MAC addresses.
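
To keep an eye on the bond (mode, slave links and MAC addresses) I just check:

Code:
cat /proc/net/bonding/bond0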
 
