Questions about replication and ZFS

anythingiwant

New Member
Nov 8, 2022
Hello,

I was hoping to get some clarification about ZFS and replication.

My VM 104 is running on Proxmox2 in my cluster. I set up a replication task to replicate it to Proxmox1 in my cluster. Is the VM on Proxmox2 using the snapshot, or is it safe to delete the snapshots? (This goes for all VMs.) Maybe I need more of an explanation of what those snapshots are in relation to the running VM.

thanks!

Here is the pool on Proxmox1:

Code:
root@ProxMox1:~# zfs list -t all
NAME                                                      USED  AVAIL     REFER  MOUNTPOINT
zPool                                                  793G   949G      140K  /zPool
zPool@snaps                                           81.4K      -      140K  -
zPool/vm-100-disk-0                                    164G  1.01T     74.6G  -
zPool/vm-100-disk-0@__replicate_100-1_1667907003__       0B      -     74.6G  -
zPool/vm-100-disk-0@__replicate_100-2_1667910603__       0B      -     74.6G  -
zPool/vm-102-disk-0                                   3.09M   949G      145K  -
zPool/vm-102-disk-1                                   70.8G   995G     25.0G  -
zPool/vm-102-disk-2                                   6.36M   949G       93K  -
zPool/vm-103-disk-0                                   3.09M   949G      145K  -
zPool/vm-103-disk-1                                   70.8G   990G     29.6G  -
zPool/vm-103-disk-2                                   6.36M   949G       93K  -
zPool/vm-104-disk-0                                   3.26M   949G      174K  -
zPool/vm-104-disk-0@__replicate_104-0_1667916907__       0B      -      174K  -
zPool/vm-104-disk-1                                    116G  1016G     49.6G  -
zPool/vm-104-disk-1@__replicate_104-0_1667916907__       0B      -     49.6G  -
zPool/vm-104-disk-2                                   6.45M   949G       93K  -
zPool/vm-104-disk-2@__replicate_104-0_1667916907__       0B      -       93K  -
zPool/vm-2051-disk-1                                   156G  1.04T     45.7G  -
zPool/vm-2051-disk-1@__replicate_2051-0_1667901603__  12.2M      -     45.7G  -
zPool/vm-2051-disk-1@__replicate_2051-1_1667903403__     0B      -     45.7G  -
zPool/vm-4050-disk-0                                   215G   949G      197G  -
zPool/vm-4050-disk-0@mobodied                            0B      -     89.1G  -
zPool/vm-4050-disk-0@snaps                               0B      -     89.1G  -
zPool/vm-4050-disk-0@__replicate_4050-1_1667772001__   787M      -      197G  -
zPool/vm-4050-disk-0@__replicate_4050-2_1667854805__     0B      -      197G  -
root@ProxMox1:~#

Here is the pool on Proxmox2:

Code:
root@ProxMox2:~# zfs list -t all
NAME                                                      USED  AVAIL     REFER  MOUNTPOINT
zPool                                                  719G  1023G      140K  /zPool
zPool/vm-100-disk-0                                    164G  1.09T     74.6G  -
zPool/vm-100-disk-0@__replicate_100-2_1667824201__       0B      -     74.6G  -
zPool/vm-100-disk-0@__replicate_100-1_1667907003__       0B      -     74.6G  -
zPool/vm-101-disk-0                                   66.4G  1.02T     49.4G  -
zPool/vm-101-disk-1                                   3.09M  1023G      151K  -
zPool/vm-104-disk-0                                   3.26M  1023G      174K  -
zPool/vm-104-disk-0@__replicate_104-0_1667916907__       0B      -      174K  -
zPool/vm-104-disk-1                                    116G  1.06T     49.6G  -
zPool/vm-104-disk-1@__replicate_104-0_1667916907__       0B      -     49.6G  -
zPool/vm-104-disk-2                                   6.45M  1023G       93K  -
zPool/vm-104-disk-2@__replicate_104-0_1667916907__       0B      -       93K  -
zPool/vm-2051-disk-1                                   157G  1.11T     45.7G  -
zPool/vm-2051-disk-1@__replicate_2051-1_1667817011__   508M      -     45.7G  -
zPool/vm-2051-disk-1@__replicate_2051-0_1667901603__     0B      -     45.7G  -
zPool/vm-4050-disk-0                                   215G  1023G      197G  -
zPool/vm-4050-disk-0@mobodied                            0B      -     89.1G  -
zPool/vm-4050-disk-0@snaps                               0B      -     89.1G  -
zPool/vm-4050-disk-0@__replicate_4050-2_1667854805__  47.5M      -      197G  -
zPool/vm-4050-disk-0@__replicate_4050-1_1667858406__  42.7M      -      197G  -
root@ProxMox2:~#
 
Those snapshots capture the state of the disk at the time of the replication. They are required for continued replication, since each run only sends the changes made since the last snapshot.

If you delete those, the replication will fail.
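To picture why: Proxmox's storage replication is built on incremental ZFS send/receive. Roughly (this is only a hand-written sketch, not the exact commands the replication job runs), a run looks like the following, using names from your listing and a hypothetical <new-timestamp> placeholder:

Code:
# First run: send the whole volume up to the first replication snapshot
zfs snapshot zPool/vm-104-disk-1@__replicate_104-0_1667916907__
zfs send zPool/vm-104-disk-1@__replicate_104-0_1667916907__ | ssh ProxMox1 zfs recv zPool/vm-104-disk-1

# Every later run: take a new snapshot and send only the delta since the previous one.
# This is why the existing __replicate_* snapshot has to stay on both nodes.
zfs snapshot zPool/vm-104-disk-1@__replicate_104-0_<new-timestamp>__
zfs send -i @__replicate_104-0_1667916907__ zPool/vm-104-disk-1@__replicate_104-0_<new-timestamp>__ \
  | ssh ProxMox1 zfs recv zPool/vm-104-disk-1

# After a successful run, the older replication snapshot is dropped on both sides
# and the newest one becomes the common base for the next run.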
 
Thank you,

If we want to start the replication process over, i.e. remove all replicated data from node 1 and node 3 (node 3 is not referenced above), can we remove the full datasets on nodes 1 and 3, then delete the snapshots on node 2, and then re-run replication?
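For what it's worth, the manual cleanup being described would look roughly like the sketch below (dataset and snapshot names taken from the listings above; this assumes the replication jobs for the guest are removed or disabled first, and deleting the job in the GUI or via pvesr may already clean up the target copy for you):

Code:
# On the nodes that only hold the replicated copy (e.g. node 1): remove the whole volumes;
# -r also destroys their __replicate_* snapshots
zfs destroy -r zPool/vm-104-disk-0
zfs destroy -r zPool/vm-104-disk-1
zfs destroy -r zPool/vm-104-disk-2

# On the node where VM 104 actually runs (node 2): remove only the replication snapshots,
# never the volumes themselves
zfs destroy zPool/vm-104-disk-0@__replicate_104-0_1667916907__
zfs destroy zPool/vm-104-disk-1@__replicate_104-0_1667916907__
zfs destroy zPool/vm-104-disk-2@__replicate_104-0_1667916907__

# With no common snapshot left, the next replication run starts with a full send again.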
 
