We have two KVM guests, one Ubuntu 20.04 LTS and one Debian 10.
One has a 100 GB disk using roughly 54 GB, the other a 240 GB disk using roughly 146 GB.
Both are replicated using pvesr, and both hold mostly static content (lots of images that never change).
For the Debian guest, zfs list (and zfs list -t snapshot) reports the following on the active node:

NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
replicaA/vm-130024-disk-0                                     454G  13.4T   138G  -
replicaA/vm-130024-disk-0@__replicate_130024-0_1674417744__  58.8G      -   196G  -

while on the replication target it reports:

NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
replicaA/vm-130024-disk-0                                     454G  6.01T   196G  -
replicaA/vm-130024-disk-0@__replicate_130024-0_1674417744__     0B      -   196G  -
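For what it's worth, here's my back-of-the-envelope arithmetic on those numbers; I may well be misreading how USED is accounted, but I can only explain about 197G of the 454G:

```shell
# Rough space accounting for vm-130024 on the active node,
# using the zfs list numbers quoted above (values in G).
awk 'BEGIN {
    refer = 138      # REFER of the live zvol
    snap  = 58.8     # USED of the replication snapshot
    used  = 454      # USED reported for the zvol
    accounted = refer + snap
    printf "accounted: %.1fG, unexplained: %.1fG\n", accounted, used - accounted
}'
# prints: accounted: 196.8G, unexplained: 257.2G
```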
Anyhow, replication of this guest is usually done within seconds.
The Ubuntu guest is reported as follows on the active node:

NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
replicaA/vm-130040-disk-0                                     193G  13.3T  50.7G  -
replicaA/vm-130040-disk-0@__replicate_130040-0_1674417822__  40.0G      -  90.1G  -

and on its replication target:

NAME                                                          USED  AVAIL  REFER  MOUNTPOINT
replicaA/vm-130040-disk-0                                     193G  3.26T  90.1G  -
replicaA/vm-130040-disk-0@__replicate_130040-0_1674417822__     0B      -  90.1G  -
This one takes up to 30 minutes to replicate.
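In case it helps, this is roughly how I intend to dig into the slow run; the first snapshot name in the dry-run send is a placeholder for whatever the previous replication snapshot is called:

```shell
# Break USED down into its components (snapshots, refreservation, children)
zfs list -o space replicaA/vm-130040-disk-0

# Dry-run the incremental send between the last two replication snapshots
# to see how much data pvesr actually has to ship (-n = dry run, -v =
# print estimated size; the OLD snapshot name is a placeholder):
zfs send -n -v -i replicaA/vm-130040-disk-0@__replicate_130040-0_OLD__ \
    replicaA/vm-130040-disk-0@__replicate_130040-0_1674417822__

# Watch pool I/O on both nodes while a replication run is active
zpool iostat -v 5
```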
Any hints on how to investigate such an issue would be appreciated.
I'd also be interested to know whether a usage value like 454G for a 240G disk (basically 100% overhead) is normal and should be accounted for in storage-size planning, or whether there is a sound way to keep ZFS usage at least near the size of those virtual disks.
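For the planning question, these are the properties I'd check first, since (as far as I understand) refreservation, copies, and raidz padding caused by a small volblocksize can all push a zvol's USED well past its volsize:

```shell
# Properties that commonly inflate a zvol's USED beyond the virtual disk size
zfs get volsize,volblocksize,refreservation,compression,copies \
    replicaA/vm-130024-disk-0
```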