qcow2/replication chicken-and-egg

michabbs

In order to have replication and immediate migration, ZFS storage is a must.
In order to have "real" unrestricted snapshots and the VM cloning feature, qcow2 storage is necessary.

Did I miss something?
Are there any plans to combine these features?
 
Hi,
In order to have replication and immediate migration, ZFS storage is a must.
In order to have "real" unrestricted snapshots and the VM cloning feature, qcow2 storage is necessary.
Yes, to have all of these features without restrictions, you currently need qcow2 on a shared storage, or Ceph.
Did I miss something?
Are there any plans to combine these features?
There is an open feature request for cloning from ZFS snapshots, but there are some caveats and nobody is actively working on it AFAIK.
 
I'm having the same issue.

I would like to understand the problem, since:
qcow2 supports snapshotting (arguably even better than ZFS),
but replication is not possible with qcow2?

What is the restriction that prevents replication from working with qcow2 at all?

The workaround with shared storage brings down the overall reliability of the whole system, because a failure of the shared storage takes everything down with it.
 
Hi,
I'm having the same issue.

I would like to understand the problem, since:
qcow2 supports snapshotting (arguably even better than ZFS),
but replication is not possible with qcow2?

What is the restriction that prevents replication from working with qcow2 at all?

The workaround with shared storage brings down the overall reliability of the whole system, because a failure of the shared storage takes everything down with it.
AFAIK, there is no similar replication mechanism (with incremental snapshot support) implemented for qcow2 images at the moment. It could in principle be implemented with something like drive mirroring plus persistent dirty bitmaps, but it would still require spawning a QEMU instance on the target just to do the replication (which is pretty costly). There is no concrete plan at the moment.
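To make the bitmap idea a bit more concrete: a hedged sketch, not a supported feature. Persistent dirty bitmaps (the `qemu-img bitmap` subcommand exists since QEMU 5.0) could track which clusters changed between replication runs. The image and bitmap names below are made up, and `DRY_RUN=1` only prints the commands, since actually shipping the dirty clusters would still need a running QEMU doing the mirroring, as noted above.

```shell
# DRY_RUN=1 (the default) echoes the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

IMG=vm-100-disk-0.qcow2    # hypothetical image name

run qemu-img bitmap --add "$IMG" repl0     # start tracking writes; the bitmap is stored in the image
run qemu-img bitmap --clear "$IMG" repl0   # after a successful sync, reset the bitmap for the next run
```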
 
Hi Fiona,

thank your for the quick reply.

Maybe I am misunderstanding something, but AFAIK qcow2 is even better suited for creating snapshots than ZFS?

Cold standby replication
My (maybe too simple) assumption for a simple solution via qcow2:
  1. run the filesystem as usual
  2. create a qcow2 snapshot of the current state, let's name it "prxReplUUID"
  3. replicate the snapshot prxReplUUID
  4. wait for the replication to finish
  5. on success, delete snapshot prxReplUUID
By the way, this is the mechanism I have used on VMware since about 2009, very successfully, to manually replicate snapshot states for a cold standby of the machine. You can also take a look at ghettoVCB, which uses this mechanism to back up VMware images.

The mechanism described above would fully decouple the underlying filesystem and storage system. And I think most users here would already be fully happy with that simplistic approach.
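The steps above can be sketched as a dry-run script. This assumes an offline (or at least crash-consistent) qcow2 image; "backup-host" and all paths are made up. With `DRY_RUN=1` the commands are only printed, since `qemu-img` internal snapshots must not be created on an image a running VM has open.

```shell
# DRY_RUN=1 (the default) echoes the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

SNAP=prxReplUUID                                    # snapshot name from the post
IMG=/var/lib/vz/images/100/vm-100-disk-0.qcow2      # hypothetical image path

run qemu-img snapshot -c "$SNAP" "$IMG"    # step 2: create the snapshot
run rsync -a "$IMG" "backup-host:$IMG"     # steps 3-4: ship the image (a full copy, not incremental)
run qemu-img snapshot -d "$SNAP" "$IMG"    # step 5: on success, delete the snapshot
```

Note that copying the whole image is not incremental at the block level, which is exactly the gap the rest of the discussion is about.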

If required, it would also be possible to add restrictions like:
- a minimum interval of 5 or 15 minutes between replication runs

Online failover
For online failover, I am not sure whether this mechanism would work. But as of now I cannot see why it would not work with the given approach, since ZFS replication should work the same way. Or am I misunderstanding something?
Because after step 5, which may be costly in terms of bandwidth usage, we could do:
6. rerun from step 2 and enforce a memory snapshot
7. migrate the VM to the new host
 
Cold standby replication
My (maybe too simple) assumption for a simple solution via qcow2:
  1. run the filesystem as usual
  2. create a qcow2 snapshot of the current state, let's name it "prxReplUUID"
  3. replicate the snapshot prxReplUUID
  4. wait for the replication to finish
  5. on success, delete snapshot prxReplUUID
The upper layer managing replication jobs/replication snapshot names/times, etc. can just be re-used.

But AFAIK there is no support for this for qcow2 on the storage layer. What exact commands would you use to achieve the replication for a given qcow2 snapshot on the storage layer? ZFS has send/receive with support for sending incremental snapshots, I'm not aware of anything similar for qcow2.
 
The upper layer managing replication jobs/replication snapshot names/times, etc. can just be re-used.

But AFAIK there is no support for this for qcow2 on the storage layer. What exact commands would you use to achieve the replication for a given qcow2 snapshot on the storage layer? ZFS has send/receive with support for sending incremental snapshots, I'm not aware of anything similar for qcow2.
Can you please clarify your questions?

Are you asking how to create a qcow2 snapshot?

Or are you asking how I would like to run the replication on top of an existing snapshot?


But AFAIK there is no support for this for qcow2 on the storage layer.
If you are referring to the fact that there is no replication option on top of an existing snapshot: yes, otherwise I assume your developers would already have implemented it?

My hint was to find a solution for that case.

But to be sure, can you share the corresponding ZFS commands you are referring to (snapshot and replication), and I will check the qcow2 level too?
 
Or are you asking how I would like to run the replication on top of an existing snapshot?
Yes, you need to somehow move the actual disk changes from the source image to the target image, while the VM is running, in a way that is consistent and incremental, i.e. only move the difference between the previous and new replication snapshot.
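For comparison, here is a dry-run sketch of the ZFS mechanism in question: an incremental `zfs send` ships only the blocks that changed between two snapshots, which is the counterpart qcow2 currently lacks. Dataset and host names are made up; `DRY_RUN=1` only prints the commands.

```shell
# DRY_RUN=1 (the default) echoes the commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

DS=rpool/data/vm-100-disk-0    # hypothetical dataset name

run zfs snapshot "$DS@repl_1"                                               # first replication snapshot
run sh -c "zfs send $DS@repl_1 | ssh target zfs receive -F $DS"             # initial full send
run zfs snapshot "$DS@repl_2"                                               # next replication run
run sh -c "zfs send -i $DS@repl_1 $DS@repl_2 | ssh target zfs receive $DS"  # incremental: only the delta
```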
 
