I have read through the documentation, but I'm not sure I'm following it well. I'm not even sure whether what I want to accomplish is possible with Proxmox, so I thought I'd ask here.
I have a Proxmox node set up and hosting several containers and VMs. This started as a hobby and a chance to learn, but as I've experimented with various services, friends and family have started using them, and I'm realizing that I need to improve reliability before something catastrophic happens.
My main node has a 6TB ZFS pool made up of three mirrored sets of 2TB M.2 NVMe drives. This setup already saved my hide once: when one of the drives failed, I was able to get it replaced under warranty and resilver the pool without any interruption to operation. However, I decided I needed something more, so I set up two more lower-powered nodes, each with a single 6TB spinning disk, and created a cluster to join them to my primary node. I had experimented with a few old laptops first and was able to get VMs and containers working with High Availability on Ceph, testing by the simple expedient of unplugging network cables from nodes and watching what happened. Now that I have the real system set up, though, it appears it won't be so simple.
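For reference, the main pool is laid out as three two-way mirror vdevs striped together, which is where the roughly 6TB usable comes from. It's equivalent to a pool created with something like the following (the pool and device names here are just placeholders, not my actual ones):

    # three mirrored pairs of NVMe drives, striped into one pool
    zpool create tank \
        mirror /dev/nvme0n1 /dev/nvme1n1 \
        mirror /dev/nvme2n1 /dev/nvme3n1 \
        mirror /dev/nvme4n1 /dev/nvme5n1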
With my main array being ZFS, is Ceph even an option? And if it is, is it the best option?
My ultimate goal is to have live copies of several guest containers and VMs on the 6TB drives of both node2 and node3. The idea is that if node1 goes down for some reason, or even if I just want to test a Proxmox update on one node before pushing it to all of them, the guests will quickly come back up on either node2 or node3 (or, better yet, balance between them), and when node1 returns the services will migrate back to the fast hardware. I don't mind doing that last step manually if needed, but I'd prefer automation.
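For what it's worth, I think I can see how the failover/failback preference would be expressed, assuming I'm reading the HA documentation correctly: an HA group with node priorities, something like the sketch below (the group name and VM ID are made up, and I'm going from memory on the exact syntax). What I can't work out is the storage side, i.e. how to keep the guests' disks current on node2 and node3 when node1 is on ZFS.

    # HA group that prefers node1; with nofailback=0 guests should move
    # back to node1 automatically once it rejoins the cluster
    ha-manager groupadd prefer-node1 --nodes "node1:2,node2:1,node3:1" --nofailback 0

    # add a guest (hypothetical VM 100) as an HA resource in that group
    ha-manager add vm:100 --group prefer-node1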
What would others here recommend that I do to achieve this with the hardware I have to work with, or did I go about this all wrong?
Thanks,