Homelab: 3-node cluster, 2 diskless and one exposing storage via NFS

Etienne Charlier

Well-Known Member
Oct 29, 2018
Hi,

I'd like to set up a homelab to practice devops techniques: Ansible, Terraform, Kubernetes, Rancher...

I have a storage homeserver
- Hardware
- Core i3-2100 (2 cores, HT)
- 12GB RAM
- 8GB SLC eUSB DOM
- 4x 3TB HDD
- 1x 256GB NVMe SSD on a passive PCIe adapter
- 1x 10Gb Mellanox ConnectX-2
- Software
- Proxmox 6 (on the eUSB DOM)
- 4x 3TB in mdadm RAID 5 (/dev/md0)
- bcache0 made of md0 and the NVMe SSD
- Thin volume group
- VG used as storage for Proxmox
- LV of a few TB mounted on /storage
- NFS server exporting /storage/DATASTORE on homeserver (export line sketched after this list)
- Powers a few LXC containers with "selfhosted" services (Nextcloud/Bitwarden...)
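
(The export line is roughly like the following; the subnet and options here are illustrative, not my exact config.)

Code:
# /etc/exports on homeserver (subnet/options are placeholders)
/storage/DATASTORE  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)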

I have a pair of Optiplex 7010 SFF
- Compute nodes
- 2x Optiplex 7010
- Core i7-3770
- 32GB RAM
- 2.5" HDD
- 1x 10Gb Mellanox ConnectX-2
- Software
- Proxmox 6 installed on the 2.5" HDD
- Part of a 2-node cluster (the 2 Optiplexes)
- QDevice located on a VM on the storage node
- Connected to the NFS storage exposed by the storage node (added roughly as sketched below)
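
(The storage was added at the datacenter level with something like the following; the storage ID and hostname are placeholders.)

Code:
# add the NFS export as shared storage for disk images and container rootfs (ID/hostname are placeholders)
pvesm add nfs nfs-datastore --server homeserver --export /storage/DATASTORE --content images,rootdir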

I'd like to make a 3-node cluster and store my VMs on the NFS storage.
When I'm "playing devops", I can power up my compute nodes and have the full power of my cluster.
When I'm not playing, I'd like to save some electricity and be able to move a few VMs onto the storage node and power down my compute nodes.

Would it work?
I presume I need to remove my QDevice?

How should I configure the cluster to allow it to work with only 1 node out of 3 available?

I understand that my storage node is a SPOF, but this is a homelab, so that's no big deal!
 
Hi,

Would it work?
Yes, but only under the assumption that it is a test lab and not production.
With the following settings you break the multi-master approach that Proxmox VE uses.

I presume I need to remove my QDevice?
Yes, you must remove it.

You can give the always-running node 3 votes instead of one.
Again, this breaks the multi-master approach, and in production this will be a problem, especially with HA.

Then you can add the two other nodes, each with one vote, to the main node.
 
Hi Wolfgang,

Thanks for the advice on modifying the number of votes ! I'll look into it !

I made some tests during the weekend and ....

* All 3 nodes ("master" holding the storage, node01 and node02 being the compute nodes) are now part of the same cluster
* I configured at the cluster level an NFS storage pointing to the directory exported by the master
* It looks like, if the master is "connected to itself" (as far as NFS is concerned), it has difficulties rebooting (takes ages, stuck tasks, "watchdog not stopped" messages), even without any VM/LXC powered on
* If I modify the access rules for this storage and remove the master node from the list of allowed nodes, then the reboot is "normal"

Any clue?
Thanks in advance
Etienne
 
I guess the problem is the systemd shutdown sequence.
An NFS server is not part of a normal Proxmox VE environment, so it will be stopped too early, before the unmount call.
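
If you want to check the ordering, something like this shows what the unit declares (assuming the systemd unit is in use, not an init.d script):

Code:
# show the ordering dependencies declared by the NFS server unit
systemctl show -p Before,After nfs-server.service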
 
I dug a little bit into this dependency issue.

I noticed nfs-kernel-server was started via an init.d script (I don't remember if I followed an outdated tutorial or if it's the default package behavior).
Long story short: after disabling the init.d script and enabling the systemd unit, my master server reboots cleanly even when connected "to itself" via NFS.

Code:
# stop the legacy init.d script from managing the NFS server
update-rc.d nfs-kernel-server disable
# let the native systemd unit manage it instead, so shutdown ordering is handled correctly
systemctl enable nfs-server.service
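
After the next reboot, a quick sanity check that the systemd unit is the one actually running:

Code:
systemctl status nfs-server.service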

Access seems a little bit slower, but not by much (I still need to run a few benchmarks to measure the difference between a directory storage and a "connected to self" NFS share...).
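
Probably something simple with fio, along these lines (the mount path and sizes below are placeholders, not measured results):

Code:
# rough sequential write test against the NFS-backed storage (path/size are placeholders)
fio --name=seqwrite --directory=/mnt/pve/nfs-datastore --rw=write --bs=1M --size=4G --direct=1 --numjobs=1 --group_reporting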


I also changed quorum_votes to 3 for the master node in /etc/pve/corosync.conf (copied it to $HOME, edited it, then copied it back in place).
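
For reference, the relevant part of the nodelist now looks roughly like this (node names and addresses are placeholders), and config_version in the totem section has to be bumped when editing:

Code:
nodelist {
  node {
    name: master
    nodeid: 1
    quorum_votes: 3
    ring0_addr: 192.168.1.10
  }
  node {
    name: node01
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.1.11
  }
  node {
    name: node02
    nodeid: 3
    quorum_votes: 1
    ring0_addr: 192.168.1.12
  }
}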

Even if it "breaks" the multi-master concept, it could be an interesting configuration for people who have a variable load on their cluster and high power bills! One node stays always on, and the others can be powered off when the load is low (nights/weekends).

Thanks for your hints...
I keep learning about this wonderful product, Proxmox!
Etienne
 
