Homelab/Home Office 2-Node Proxmox Setup: When to Use PDM vs PVE-CM (Clustering)?

Sep 1, 2022
Hello!

Is there a good overview/help document somewhere discussing how to decide between using PVE's traditional clustering versus PDM?

I've got a two node PVE cluster in my home office, and am interested in managing both nodes from a single interface, shared storage, etc. I'd previously planned on setting up a cluster with a q-device, but now I'm wondering if maybe PDM would be the more lightweight, easier to manage option that would still do everything I want.

But since I'm so new at multi-node Proxmox setups, I was hoping there might be some sort of documentation or guide on how to choose.

Thanks for any advice. :)
 
If I am not mistaken, there is no "shared storage" concept with PDM. While you can migrate the VMs across two independent PVEs, that is achieved via remote-copy, not shared storage. Even ZFS replication, when managed by PVE, requires a cluster.

Since it's a home lab, give both a try and see what you like best.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
In our internal documentation, a "just" 2-node setup is not recommended. Our recommendation:

2 PVE servers +
1 PBS server (hardware) or something else that can act as a voting node: https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_qdevice_technical_overview
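For reference, setting up such an external voting node is only a few commands. This is a sketch, not verbatim from this thread; the address 192.168.1.50 is a placeholder for your QDevice host's IP:

```shell
# On the QDevice host (e.g. the PBS box): install the qnetd daemon
apt update && apt install -y corosync-qnetd

# On each PVE cluster node: install the qdevice client
apt install -y corosync-qdevice

# On one PVE cluster node: register the external voter
# (192.168.1.50 is a placeholder for the QDevice host's IP)
pvecm qdevice setup 192.168.1.50

# Verify: expected votes should now show 3
pvecm status
```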

In all cases we end up with 3 votes in the corosync cluster, while only having 2 actual PVE servers.
For storage in the 2-node case we use local ZFS storage with ZFS replication (this actually works very well; just make sure you have enough space for ALL VMs in your cluster on a single node's storage).

PDM you can forget in your case: at the moment it's not yet very useful for large clusters, and absolutely useless for small setups.
 


Thanks for this. I can't believe I somehow forgot that I can use my PBS server as my q-device. Somehow, I randomly forget that Proxmox can do anything Debian can do (even if that's not always a good idea).

How do you set up the local storage with replication? That sounds like something I'd like to learn more about. :)
 
Thanks for this. I can't believe I somehow forgot that I can use my PBS server as my q-device. Somehow, I randomly forget that Proxmox can do anything Debian can do (even if that's not always a good idea).
Please note that after adding the qdevice, it can be accessed from the cluster via SSH, which might not be what you want for your backup host. This can be mitigated with a setup like the one described by @aaron here: https://forum.proxmox.com/threads/2-node-ha-cluster.102781/#post-442601

- Install single-node PVE and PBS on the same host; don't add that host itself as a qdevice
- Create a Debian VM or LXC on the combined PBS/PVE single node and add it as the qdevice
- Now you can use the node as a backup host (you might even set up something like restic rest-server or borgbackup in another LXC to act as a target for non-PVE backup data) and as a qdevice, without needing to worry about the security implications
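A minimal sketch of that layout, assuming an LXC is used (the VMID 200, the template filename, and the IPs are placeholders; adjust the template to what `pveam available` lists):

```shell
# On the combined PBS/PVE single node: create a small Debian container
# (template filename varies by release; adjust to what pveam lists)
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname qdevice --memory 256 \
  --net0 name=eth0,bridge=vmbr0,ip=192.168.1.60/24,gw=192.168.1.1
pct start 200

# Inside the container: install the qnetd daemon
pct exec 200 -- apt update
pct exec 200 -- apt install -y corosync-qnetd

# On one node of the 2-node cluster: point the qdevice at the container,
# not at the PBS/PVE host itself
pvecm qdevice setup 192.168.1.60
```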

How do you set up the local storage with replication? That sounds like something I'd like to learn more about. :)

https://pve.proxmox.com/wiki/Storage_Replication should cover everything; basically you need a ZFS pool added as storage on both nodes with the same name. Ideally you will have a dedicated fast migration or replication network which is not used for guest or corosync traffic.
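As a sketch of those steps (the pool name `tank`, the storage ID, the target node `pve2`, and VMID 100 are placeholders, not from this thread):

```shell
# On each node: create a ZFS pool with the SAME name (here: "tank")
zpool create tank /dev/disk/by-id/<your-disk>

# Once, on either node: register it as a cluster-wide ZFS storage
pvesm add zfspool local-zfs-repl --pool tank --content images,rootdir

# Create a replication job: replicate VM 100 to node "pve2" every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Check replication state
pvesr status
```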

PDM you can forget in your case: at the moment it's not yet very useful for large clusters, and absolutely useless for small setups.

I disagree: if one is not able to fulfill the requirements for a cluster (like a dedicated corosync network) and doesn't want to use the CLI qm remote-migrate/pct remote-migrate commands, one can use PDM to still migrate VMs or LXCs between the independent single nodes.
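For reference, a cross-cluster CLI migration looks roughly like this; the hostname, API token, fingerprint, storage, and bridge names are placeholders, so check `man qm` on your PVE version for the exact options:

```shell
# Migrate VM 100 from this node to an unrelated PVE host.
# Everything in <angle brackets> and the hostname are placeholders.
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!migrate=<secret-uuid>,host=target.example.com,fingerprint=<cert-fingerprint>' \
  --target-storage local-zfs \
  --target-bridge vmbr0 \
  --online

# Containers use the analogous pct command:
# pct remote-migrate <vmid> <target-vmid> <endpoint> \
#   --target-storage <storage> --target-bridge <bridge>
```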
 
- Create a Debian vm or lxc and add it as qdevice
No! While this technically might work, it invalidates the cluster: you shouldn't have 2 votes on a single node... Might as well install the qdevice on one of the hosts directly (is that even possible?)
I disagree: if one is not able to fulfill the requirements for a cluster (like a dedicated corosync network) and doesn't want to use the CLI qm remote-migrate/pct remote-migrate commands, one can use PDM to still migrate VMs or LXCs between the independent single nodes.
What exactly do you disagree with? @SInisterPisces can build a valid 2-node cluster, and therefore has no use for PDM over the built-in management.
 
No! While this technically might work, it invalidates the cluster: you shouldn't have 2 votes on a single node... Might as well install the qdevice on one of the hosts directly (is that even possible?)
I know, that's the reason why this LXC/VM would be installed on the PBS host which also acts as a single-node PVE, as explained by Aaron in the referenced thread. I will edit my post to avoid such a misunderstanding, but did you even read the link? The idea is to avoid giving the cluster members passwordless access to the PBS (which one would do by adding it as a qdevice directly).