Hello,
I've got a couple of Proxmox nodes that aren't connected in any way yet. They're both Intel 12th Gen CPU-based: one is a 12700T mini PC, and the other is an N100 SBC. Their hardware is very much not identical, but they share the same CPU family (though the 12700T has a more powerful iGPU).
I'd like to manage them from a single interface. I was planning to set up a cluster (I realize I'd need to be careful with HA-based live migration due to the disparate hardware on each node), but at this point I'm wondering if the Proxmox Datacenter Manager might be the better choice given that they're not identical nodes.
One thing I was looking forward to when I clustered them was being able to define backup jobs and external storage and SDN configurations in one place for both nodes without having to keep track of identical configs on two separate nodes--I'm not sure those are things Datacenter Manager can do yet?
So unless there's a really good reason to avoid it, I think a cluster is still the best way to go for unified management?
Note: I was planning to set up a logging server and RAM-based logging on my nodes to try to mitigate the additional boot disk I/O that clustering introduces, and also run a separate QDevice, so I'd already planned on mitigating two of the bigger pain points with clustering. PDM wasn't a thing when I started considering how to set this up.
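For context, the RAM-based logging and QDevice mitigations I have in mind look roughly like this (a sketch, not a finished setup; the remote log host addresses are placeholders, and the rsyslog forwarding line assumes I run rsyslog alongside journald):

```shell
# /etc/systemd/journald.conf.d/volatile.conf
# Keep the journal in RAM only (tmpfs under /run/log/journal) so that
# corosync/pmxcfs log chatter doesn't add write wear to the boot disk.
[Journal]
Storage=volatile
RuntimeMaxUse=64M

# /etc/rsyslog.d/10-remote.conf
# Forward everything to the logging server instead of keeping it locally
# (192.0.2.10 is a placeholder for my log host).
# *.* @@192.0.2.10:514

# Once the cluster exists, add the external vote from a third machine
# (192.0.2.20 is a placeholder for the QDevice host):
# pvecm qdevice setup 192.0.2.20
```

The idea being that with volatile journald storage plus remote forwarding, the only regular disk writes left from clustering should be pmxcfs itself.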
I'd appreciate any advice. Thanks!