Unclear about setting up replication

Tommmii

Well-Known Member
Jun 11, 2019
Hello,

I'm trying to get replication working between 2 PVE nodes. (I also have a Raspberry Pi on the network configured as a QDevice.)

After doing a replication, the LXC doesn't show up on the other PVE node (see attached screenshot):

[Screenshot attachment: 1582295492731.png]

What am I missing?

This is the log output of my first attempt at replicating an LXC:

Code:
2020-02-21 15:16:42 203-0: start replication job
2020-02-21 15:16:42 203-0: guest => CT 203, running => 1
2020-02-21 15:16:42 203-0: volumes => zfs-containers:subvol-203-disk-0
2020-02-21 15:16:43 203-0: freeze guest filesystem
2020-02-21 15:16:43 203-0: create snapshot '__replicate_203-0_1582294602__' on zfs-containers:subvol-203-disk-0
2020-02-21 15:16:44 203-0: thaw guest filesystem
2020-02-21 15:16:44 203-0: incremental sync 'zfs-containers:subvol-203-disk-0' (__replicate_203-0_1582294440__ => __replicate_203-0_1582294602__)
2020-02-21 15:16:45 203-0: zfs-pool/subvol-203-disk-0@__replicate_203-0_1582294440__    name    zfs-pool/subvol-203-disk-0@__replicate_203-0_1582294440__    -
2020-02-21 15:16:45 203-0: send from @__replicate_203-0_1582294440__ to zfs-pool/subvol-203-disk-0@__replicate_203-0_1582294602__ estimated size is 523K
2020-02-21 15:16:45 203-0: total estimated size is 523K
2020-02-21 15:16:45 203-0: TIME        SENT   SNAPSHOT zfs-pool/subvol-203-disk-0@__replicate_203-0_1582294602__
2020-02-21 15:16:46 203-0: delete previous replication snapshot '__replicate_203-0_1582294440__' on zfs-containers:subvol-203-disk-0
2020-02-21 15:16:47 203-0: (remote_finalize_local_job) delete stale replication snapshot '__replicate_203-0_1582294440__' on zfs-containers:subvol-203-disk-0
2020-02-21 15:16:47 203-0: end replication job

Code:
root@pve:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-5.3: 6.1-5
pve-kernel-helper: 6.1-5
pve-kernel-5.0: 6.0-11
pve-kernel-4.15: 5.4-6
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.3.13-2-pve: 5.3.13-2
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.21-4-pve: 5.0.21-9
pve-kernel-4.15.18-18-pve: 4.15.18-44
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.14-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-12
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-4
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-19
pve-docs: 6.1-6
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-10
pve-firmware: 3.0-5
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-3
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-6
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
root@pve:~#
Code:
root@pve2:~# pveversion -v
proxmox-ve: 6.1-2 (running kernel: 5.3.18-2-pve)
pve-manager: 6.1-7 (running version: 6.1-7/13e58d5e)
pve-kernel-5.3: 6.1-5
pve-kernel-helper: 6.1-5
pve-kernel-5.0: 6.0-11
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.18-1-pve: 5.3.18-1
pve-kernel-5.0.21-5-pve: 5.0.21-10
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.14-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-12
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-4
libpve-storage-perl: 6.1-4
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-19
pve-docs: 6.1-6
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-10
pve-firmware: 3.0-5
pve-ha-manager: 3.0-8
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-3
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-6
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1
root@pve2:~#
Code:
root@pve:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content rootdir,iso
        maxfiles 0
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

zfspool: zfs-containers
        pool zfs-pool
        content rootdir
        sparse 0

zfspool: vm-disks
        pool zfs-pool/vm-disks
        content images
        sparse 1

dir: usb-backup
        path /mnt/usb-backup
        content backup
        maxfiles 3
        shared 0

dir: zfs-iso
        path /zfs-pool/iso
        content iso,vztmpl
        nodes pve
        shared 0
Code:
root@pve2:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content rootdir,iso
        maxfiles 0
        shared 0

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

zfspool: zfs-containers
        pool zfs-pool
        content rootdir
        sparse 0

zfspool: vm-disks
        pool zfs-pool/vm-disks
        content images
        sparse 1

dir: usb-backup
        path /mnt/usb-backup
        content backup
        maxfiles 3
        shared 0

dir: zfs-iso
        path /zfs-pool/iso
        content iso,vztmpl
        nodes pve
        shared 0

root@pve2:~#
 
After doing a replication, the LXC doesn't show up on the other PVE node (see attached screenshot):
The configuration of a CT or VM can only ever be on one node; that is a basic principle of Proxmox VE and its multi-master design - a node owns its guests, so to speak.

Thus you won't see any replicated config there. The config doesn't need to be replicated anyway, as it lives on the clustered configuration filesystem under /etc/pve, which is shared across all nodes in real time.

So, let's say you do this for HA (once you have your QDevice): the HA manager can then recover the CT on the still-working node after the other one was (self-)fenced, and it starts from the last snapshot that was replicated.
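
For example, putting the CT under HA management could look roughly like this (resource ID ct:203 taken from the log above; adjust to your setup):

Code:
# register the container as an HA resource so it gets recovered automatically
ha-manager add ct:203 --state started
# show the current HA state of all managed resources
ha-manager status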

You can see the replicated ZFS snapshots on the other node, though, e.g. with zfs list -t snapshot.
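
You can also check the replication jobs themselves from the CLI on the source node, for example (job ID 203-0 as in the log above):

Code:
# list all configured replication jobs
pvesr list
# show last sync, next run and state of each job
pvesr status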
 
Am I right in understanding that the container doesn't show up on PVE2 because PVE (the originating node) is still in the cluster?

And, if for some reason (deliberate or accidental) PVE disappears, then PVE2 _will_ offer up its last known version of the replicated LXC?

Indeed, the snapshot did replicate:
Code:
root@pve2:/etc/pve# zfs list -t snapshot
NAME                                                        USED  AVAIL     REFER  MOUNTPOINT
zfs-pool/share@syncoid_pve_2020-02-21:14:26:12                0B      -     5.63T  -
zfs-pool/subvol-203-disk-0@upgrade                          935M      -     7.43G  -
zfs-pool/subvol-203-disk-0@__replicate_203-0_1582298220__   266K      -     7.34G  -
root@pve2:/etc/pve#

The config doesn't need to be replicated anyway, as it lives on the clustered configuration filesystem under /etc/pve, which is shared across all nodes in real time.
Can I verify this on PVE2?
Like so, I guess:
Code:
root@pve2:/etc/pve# find . -name 203.conf
./nodes/pve/lxc/203.conf
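
Or, since /etc/pve is cluster-wide, presumably also by listing the per-node config directories directly (node names pve and pve2 as in this setup):

Code:
# container configs, grouped by the node that currently owns them
ls /etc/pve/nodes/pve/lxc/ /etc/pve/nodes/pve2/lxc/
# the config itself is readable from any cluster node
cat /etc/pve/nodes/pve/lxc/203.conf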
 
Am I right in understanding that the container doesn't show up on PVE2 because PVE (the originating node) is still in the cluster?

Yes.

And, if for some reason (deliberate or accidental) PVE disappears, then PVE2 _will_ offer up its last known version of the replicated LXC?
The state is there, yes. But the CT does not magically get moved.
For an HA-managed setup it will get recovered and started again automatically.
If not, you can check what's up with the lost node, and if you decide that it's OK to start the CT, you can always move its config over manually and start it then; thanks to the replication it should pick up from the last state received.
 
OK to start the CT, you can always move its config over manually
...move it over, meaning:
Code:
root@pve2:/etc/pve# mv ./nodes/pve/lxc/203.conf ./lxc/203.conf
After which the LXC should be present in the web interface.
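
After that, presumably something like this to bring it up and confirm (CT ID 203 as above):

Code:
# start the recovered container on pve2 and check its state
pct start 203
pct status 203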
 
