How to replicate a VM between two nodes?

barrynza

Member
Dec 5, 2020
Hi there,

I'm trying to replicate a VM (id 100) from Node A to Node B (and vice versa), so that when I make changes to the VM on Node A, those changes are replicated to the VM on Node B. Yes, I know that needs ZFS, but I can't understand my scenario: I have two nodes, each with a 500 GB SSD, and I simply want my VM 100 replicated between them. It doesn't look simple when I get the error: missing replicate feature on volume 'local-lvm:vm-100-disk-1' (500).
- So I deleted the LVM on both nodes' SSDs and created ZFS on each as a single disk, then tried the replicate feature again - nope, still the same error.
Shouldn't Proxmox be able to replicate without making ZFS mandatory, as long as the machines are in a cluster? Any ideas would be appreciated, folks.
 
"Replication" in PVE does *not* mean the VM is configured on multiple nodes. It simply means that the storage will be synced across the network every couple of minutes, so if one node dies, the other can take over the VM via the HA stack. For this to work, *all* disks of a VM must be on a ZFS storage, as we use ZFS recv/send to incrementally sync the disk.

Please refer to our documentation for more.
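
For example, once all of a VM's disks are on ZFS, a replication job can be created in the GUI (Datacenter -> Replication) or on the CLI with pvesr, roughly along these lines (the VM ID, target node, and schedule are just placeholders):

Code:
# create replication job "100-0" for VM 100 towards node pve2, running every 15 minutes
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# show the state of the replication jobs on this node
pvesr status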
 
"Replication" in PVE does *not* mean the VM is configured on multiple nodes. It simply means that the storage will be synced across the network every couple of minutes, so if one node dies, the other can take over the VM via the HA stack. For this to work, *all* disks of a VM must be on a ZFS storage, as we use ZFS recv/send to incrementally sync the disk.

Please refer to our documentation for more.
Thanks, but that is costly. Is there any other option to achieve the above?
 
It is not possible to have a VM on two nodes at once. You can use shared storage, for example ceph, to allow fast failover via our HA stack though.

Please refer to the documentation I've linked above.
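
As a very rough sketch only (the device path and network are placeholders, and a healthy Ceph setup really wants at least three nodes), setting up Ceph on PVE nodes goes roughly like this:

Code:
pveceph install                        # install the Ceph packages on the node
pveceph init --network 10.10.10.0/24   # one-time cluster-wide Ceph config (placeholder network)
pveceph mon create                     # create a monitor on this node
pveceph osd create /dev/sdb            # turn an empty disk into an OSD
pveceph pool create vmpool             # pool that can then be added as RBD storage for VM disks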
 
Thanks Stefan, that is understandable. HA failover is the best approach in my situation; Ceph also requires more drives to be used efficiently. The best way is to have the VM disk on shared storage and use the HA failover feature.
 
"Replication" in PVE does *not* mean the VM is configured on multiple nodes. It simply means that the storage will be synced across the network every couple of minutes, so if one node dies, the other can take over the VM via the HA stack. For this to work, *all* disks of a VM must be on a ZFS storage, as we use ZFS recv/send to incrementally sync the disk.

Please refer to our documentation for more.

1. What about a VM with an SQL server running inside? Normally, if we want to take a coherent snapshot of a VM like this, we have to use qm --vmstate 1 to dump the VM's memory, because of the buffers held open by the SQL engine and the guest agent inside the VM (example below).
How does replication by corosync between the two nodes' VMs work - are only snapshots of the virtual disks sent between the nodes?

2. How can we start the replicated VMs on the second node if the main node fails and dies? Is there a procedure? Because the VMs are not visible, and a VM can't be started on the second node (the backup node).
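
For reference, the consistent snapshot mentioned in point 1 would be something along these lines (the VM ID and snapshot name are just examples):

Code:
# snapshot VM 100 including its RAM state, so in-flight buffers are captured too
qm snapshot 100 before-change --vmstate 1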
 
If you have an HA 3-node cluster, this works automatically. If you have a 2-node backup cluster and the main node (pve1) is dead, this is the procedure to wake up all the VMs and containers on the second node (pve2) that were replicated earlier via ZFS incremental snapshots:
https://www.jm.technology/post/proxmox_quorum_april_2019/
https://pve.proxmox.com/wiki/Separate_Cluster_Network#Write_config_when_not_quorate

To manually start the machines from the backup node:

Code:
# run on the backup pve2, on which the VMs and LXCs were not running, but were replicated

pvecm expected 1           # lower the expected votes so the single remaining node is quorate
systemctl stop pve-cluster
pmxcfs -l                  # restart the cluster filesystem in local mode so /etc/pve is writable

# copy the guest configs over to pve2 so they become visible there
cp /etc/pve/nodes/pve1/lxc/* /etc/pve/nodes/pve2/lxc/
cp /etc/pve/nodes/pve1/qemu-server/* /etc/pve/nodes/pve2/qemu-server/

Now you can start all the VMs and LXCs; they will run until the backup pve2 is rebooted.
After a restart, pve2 will start corosync, but the VMs will not start automatically because of the lack of quorum.

If you fix the problem with pve1 and start it up, corosync will sync the configuration entries of the VMs from pve2 to pve1 and the cluster will come back up (the VMs will still be running on pve2).
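
Once pve1 is fixed and back in the cluster, you can verify that quorum and the replication jobs are healthy again, e.g.:

Code:
pvecm status   # cluster membership and quorum state
pvesr status   # state of the storage replication jobs on this node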
 
running a two node test cluster. proxmox over deb12 installed on straight bootable zfs. local is on zfs `rpool`. two vms, one qcow2 and one raw. in both cases, if i ask to replicate, i get `missing replicate feature on volume 'local:101/vm-101-disk-0.raw' (500)`. clearly i am missing some clue, but do not know which when there are so many :)
 
@randyqx You're making progress with learning Proxmox. Keep going. :)

For the replication part of things in my test lab, it seems to work well with VMs using ZFS volumes for their storage. With that set up, it uses ZFS's native snapshot functionality to do the replication.

So, it takes a snapshot of the ZFS volume (eg rpool/data/vm-100-disk-0), then uses zfs send to copy that to a matching ZFS volume on the remote server.
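
Conceptually it's similar to doing this by hand, though Proxmox manages the snapshots and incremental sends itself (the dataset names, snapshot names, and the pve2 hostname below are just examples):

Code:
# initial sync: snapshot the VM disk and send it to the other node
zfs snapshot rpool/data/vm-100-disk-0@repl1
zfs send rpool/data/vm-100-disk-0@repl1 | ssh pve2 zfs recv rpool/data/vm-100-disk-0

# later runs only send the difference between the last two snapshots
zfs snapshot rpool/data/vm-100-disk-0@repl2
zfs send -i @repl1 rpool/data/vm-100-disk-0@repl2 | ssh pve2 zfs recv rpool/data/vm-100-disk-0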

The ZFS snapshot thing isn't going to work with qcow2 volumes, though I have no idea if Proxmox switches to an alternative replication approach for those.

If you're wanting to keep things simple and easy though, use ZFS volumes for your replicated VMs. :)
 
ok, i slogged through and here is the resolution

i added a new drive, GPTed one big partition, went into ZFS/CreateZFS, and created a new pool 'images'.
Code:
zfspool: images
        pool images
        content rootdir,images
        mountpoint /images
        nodes sox
now, when i create a VM i am not even asked for the image type, and i can ask for replication.

why it has to be a separate pool on a separate drive, ghu only knows, and she has not talked to me so far
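
for reference, the GUI steps seem to boil down to roughly this on the CLI (pool name and device path are just examples):

Code:
# create the pool on the new drive
zpool create images /dev/sdb

# register it in PVE as a zfspool storage for VM disks and containers
pvesm add zfspool images --pool images --content rootdir,images --nodes sox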
 
Ahhh. Does "proxmox over deb12" mean you manually installed Proxmox via packages, rather than using the Proxmox ISO for installation?

If that's the case, how wedded are you to the manual install approach vs using the Proxmox ISO?

Asking because if you do your installation and choose one of the "ZFS" options during the install, it'll set things up by default so you can do ZFS replication.

If the Proxmox ISO install approach really isn't a go-er, then you'll need to manually create a ZFS pool. Ideally you'll have a few drives completely empty (not even partition info), to make a ZFS pool from. If completely empty drives is also not an option, then you can create a ZFS pool from partitions on a disk, and it'll work "ok".

Just to make things easier to discuss, what's the hardware specs of your two nodes? eg cpu, ram, disk, & network card info :)
 
> Ahhh. "Proxmox over deb12" means you manually installed Proxmox via
> packages, rather than using the Proxmox ISO for installation yeah?

yup

> If that's the case, how wedded are you to the manual install approach
> vs using the Proxmox ISO?

ok, will look at it after a bit. a bit under water this week remotely attending
ripe88 in krakow from the US left coast; stay home and still get jet lag.

> If the Proxmox ISO install approach really isn't a go-er, then you'll
> need to manually create some ZFS pools. Ideally you'll have a few drives
> completely empty (not even partition info), to make ZFS pools from.

after a day of fun, i figured out that was the model. so i mounted a
second drive, GPTed a single partition, and createZFSed it. i could
then createVMs on it and they replicate. whew!

> Just to make things easier to discuss, what's the hardware specs of
> your two nodes? eg cpu, ram, disk, & network card info :)

you really did not want to ask that :) i am testing using VMs on a
ganeti kvm/lvm based cluster.

but, as i said just above, i think i sorted this. i remain curious why i can
not use my existing zfs pool?

thanks!
 
Cool. Sounds like you have things headed in the right direction. :cool:

With that 2nd drive you added, the more optimal approach would have been to use it in the ZFS pool without creating any partition info on it.

Along these lines (for illustration purposes only):

Bash:
# zpool create -o ashift=12 -o autotrim=on mypool /dev/disk/by-id/SOME_DRIVE_UUID_HERE

That being said, having the partition info there shouldn't really hurt anything. :)

i remain curious why i can not use my existing zfs pool?

Just guessing, but it's probably a case of the existing ZFS pool not being configured to allow disk images.

In Proxmox (as far as I've learned so far) ZFS Storage is divided into two types, one for storing templates (ie ISO images and similar), and the other for storing VM and container volumes. Your existing ZFS pool might be the type that only does templates/ISOs.

You can change what stuff is allowed in your defined err... storages (Datacenter -> Storage, then double click an entry to edit it), but I'm pretty sure there are some specific limitations as to what's allowed where. I don't remember the fine details though, as I don't personally tend to use the ISOs/templates/etc stuff much. Pretty much a user of just ZFS volumes, in all kinds of weird and interesting ways. ;)
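
For example, if a defined ZFS storage doesn't allow disk images yet, something along these lines should let you enable them, either in that GUI dialog or via pvesm (the storage name is just a placeholder):

Code:
# allow VM disk images and container volumes on the storage named "mypool"
pvesm set mypool --content images,rootdir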
 
Oh, for a two node cluster you're going to need to defang the watchdog too, as per this:

Please don't do this. The HA stack works the way it does to prevent data corruption, which can easily happen if HA is enabled and you remove the quorum requirement.

If you want to run a 2-node cluster you need to set up a QDevice. You can read more about QDevices and HA fencing in our documentation at [1] and [2], respectively.

[1] https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support
[2] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#ha_manager_fencing
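
Roughly, the QDevice setup from [1] boils down to this (the third machine's IP is a placeholder):

Code:
# on the external third machine (not a cluster node)
apt install corosync-qnetd

# on every cluster node
apt install corosync-qdevice

# on one cluster node, point the cluster at the external vote daemon
pvecm qdevice setup 192.0.2.10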
 
