I've read through the documentation for replication between 2 nodes that are not in a cluster using pve-zsync. I have this working in a lab setup, but I want to verify what I am doing and make sure it is the best way to accomplish what I need. I have a 2-node ESXi environment that I need to migrate to something else, and I am looking at XCP-ng and Proxmox.
This supports a small business, not located in any datacenter, with 2 servers split between 2 different on-site closets. For a small business it is not really practical, from an architecture standpoint, to run a cluster. I think a lot of folks simplify this and run clusters with QDevices, but the reality is that for a small business with nodes separated for redundancy reasons, there are so many single points of failure that it is impossible to always have 2 nodes reachable.
Currently I have 2 ESXi nodes and use Veeam to replicate from one node to the other and back up all the VMs. This has served me well and works for what this small business needs. If I were to lose a closet (major power issue, network switch failure, etc.), most services are duplicated across nodes, except for one file share, which could then be brought up from the Veeam-replicated VM. This is what I need to implement with the alternatives I am looking into, and I have it working with Proxmox.
I am currently running pve-zsync on one node, replicating to the other node, and it appears to work well.
pve-zsync list
SOURCE NAME STATE LAST SYNC TYPE CON
100 testzsync1 ok 2025-07-06_09:15:01 qemu ssh
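For reference, the job was created with something roughly like the following; the options shown are just the ones I took from the docs (--source, --dest, --name, --maxsnap, --verbose), and <target-pool> is a placeholder for the ZFS pool/dataset on the destination node:
pve-zsync create --source 100 --dest x.x.x.x:<target-pool> --name testzsync1 --maxsnap 2 --verbose
As far as I understand it, this drops a schedule into /etc/cron.d/pve-zsync on the source node, which is where the 15-minute interval comes from.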
The data is copied and then synced to the other node every 15 minutes, but only the disk is replicated. From reading, it appears that you also need to copy the VM configuration to the other node, which I did with scp:
scp /etc/pve/qemu-server/100.conf root@x.x.x.x:/etc/pve/qemu-server/100.conf
One of the issues I see with the scp command is that, unless I add it to a nightly cron job, it is a one-time copy; if I make any changes to the VM config options, while rare, those changes would not be carried over unless I somehow add the copy to a nightly/daily cron job. The other thing I read is that if I need to bring up the VM on the other host, I would need to ensure the replication job has been stopped. Depending on what kind of failure forced the outage, this may be a simple thing to do, or something I need to make sure I have done before I bring the other node back online.
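For completeness, the cron entry I have in mind would look something like this (hypothetical file name and schedule, nightly at 02:00, and it assumes root SSH keys are already set up between the nodes, which pve-zsync needs anyway):
# hypothetical /etc/cron.d/copy-vm-100-conf on the source node
0 2 * * * root scp -q /etc/pve/qemu-server/100.conf root@x.x.x.x:/etc/pve/qemu-server/100.conf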
I just wanted to make sure what I have above sounds correct, and I have an additional question. Once I am able to bring the original node back up, I assume I could do a one-time sync BACK to the original node. I haven't tried that, but I assume it would be possible, since I would want the application back on the original node.
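If that works the way I expect, the sync back would be a one-off run of the sync subcommand from the node that took over, pointed at the original node, after stopping/removing the existing replication job; this is an untested sketch, and <original-node-ip> and <pool-on-original-node> are placeholders:
# untested one-time reverse sync, run on the node currently hosting the VM
pve-zsync sync --source 100 --dest <original-node-ip>:<pool-on-original-node> --verbose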
Some comments that I know have been brought up before, but which I want to echo, concern the VMIDs. I understand partially why they took the approach they did, but it would have been nice for the storage to be labeled with vmid+name or similar. When you replicate to the other node without the configuration part, you only have the VMID-labeled storage, and unless I am missing something, there is no way to know on that node which name is associated with it. I know this probably is not an issue with clusters, but for non-cluster environments (smaller businesses) it just makes things more difficult when trying to track what is what while replicating. Just my commentary, it is what it is, but it really would make managing things much easier for us small guys.
I still have to get UPS and shutdown working, but since Veeam officially supports Proxmox now, that satisfies the backup requirement. Any input on what I am doing with regard to the replication would be great, thanks!