Proxmox 5.0 replication

Benoit

Renowned Member
Jan 17, 2017
Hello all,

I upgraded from Proxmox 4.4 to Proxmox 5.0 to get native replication instead of using the pve-zsync command-line tool.

I use local storage on each node.

With the pve-zsync command-line tool, I could choose where to replicate the VM:

Code:
pve-zsync create --source 10.10.10.2:105 --dest VM-STOCKAGE --verbose --maxsnap 2 --name svr-17-hve


But in the GUI I don't see where the replica goes.

On each node I have these storages:

ZFS pool "FOG"
ZFS pool "VM-STOCKAGE"
local
local-lvm
sauvegarde (on Synology NAS)

When I initiate replication from node 1 to node 2 for VM 100, which is stored on VM-STOCKAGE on node 1, does the replica go to VM-STOCKAGE on node 2?

 
When I initiate replication from node 1 to node 2 for VM 100, which is stored on VM-STOCKAGE on node 1, does the replica go to VM-STOCKAGE on node 2?
Yes, currently replication uses the same storage as the source disk(s).
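
If you want to double-check this from the shell, you can list the configured jobs and their state (a minimal sketch; the job IDs are just the ones from this thread):

Code:
# list every replication job configured in the cluster
pvesr list

# show the jobs handled by the current node; note there is no
# target-storage column, because the replica always lands in the
# storage with the same name as the source disk
pvesr status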
 
Is there a way to limit the amount of RAM used by replication?

On my nodes, I have a 10 Gb fiber network card with a direct link between the nodes. How can I be sure that replication data is transferred through those cards?
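
A hedged sketch for the network question: replication runs over SSH to the target node, and as far as I know it follows the migration network setting if one is configured in /etc/pve/datacenter.cfg (please check the docs for your exact version). Assuming the direct link uses the 10.10.10.0/24 subnet:

Code:
# /etc/pve/datacenter.cfg - route migration traffic over the direct
# 10 Gb link (assumption: replication honors this setting too)
migration: secure,network=10.10.10.0/24

To verify, watch the interface byte counters while a job runs (ens1f0 is an example interface name):

Code:
ip -s link show dev ens1f0     # note RX/TX byte counters
pvesr schedule-now 100-0       # trigger job 100-0 manually
ip -s link show dev ens1f0     # counters should have climbed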
 
I have a replication problem on my nodes.

On the first node everything is OK: replication to the other node works properly for my 5 VMs.

On the second one, I can't replicate; the log shows "unable to open file - No such file or directory" for the 2 VMs.

Can you explain why?
 
Can you post the VM config and the content of /etc/pve/replication.cfg?
 
Here is 106.conf, one of the VMs that doesn't replicate:

Code:
boot: cdn
bootdisk: virtio0
cores: 2
ide2: local:iso/ubuntu-16.04.1-server-amd64.iso,media=cdrom,size=667M
memory: 4096
name: svr-12-hve
net0: virtio=FE:20:E9:9A:E1:75,bridge=vmbr7
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=72a975fe-54c0-468c-9127-777bd3606f93
sockets: 1
virtio0: FOG:vm-106-disk-1,size=850G

Here is replication.cfg:

Code:
local: 104-0
target svr-07-hve
schedule */2:00

local: 101-0
target svr-07-hve
schedule */1:00

local: 103-0
target svr-07-hve

local: 107-0
target svr-07-hve
schedule */2:00

local: 102-0
target svr-07-hve
schedule */2:00

local: 105-0
target svr-07-hve
schedule */2:00

local: 106-0
target svr-09-hve
schedule sun 01:00

local: 100-0
target svr-09-hve
schedule */5
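
For reference, the schedule values use Proxmox's calendar-event syntax: */2:00 runs every two hours on the hour, sun 01:00 runs Sundays at 01:00, and */5 runs every five minutes. Job 103-0 has no schedule line, so it falls back to the default (every 15 minutes, if I remember the default correctly). A sketch of setting one explicitly:

Code:
# give job 103-0 the same two-hour schedule as the others
pvesr update 103-0 --schedule '*/2:00'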
 
Can you also please post your /etc/pve/storage.cfg and the full log with the error message?
 
Here is the /etc/pve/storage.cfg:

Code:
dir: local
path /var/lib/vz
content vztmpl,iso
maxfiles 1
shared 0

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

zfspool: VM-STOCKAGE
pool VM-STOCKAGE
content rootdir,images
sparse 0

zfspool: FOG
pool FOG
content images,rootdir
sparse 0

dir: sauvegarde-proxmox
path /mnt/NAS_HVE
content backup
maxfiles 2
shared 0
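
A side note, since replication targets the same-named storage on the other node (see above): the ZFS pools VM-STOCKAGE and FOG must exist under those names on the target nodes too. A quick hedged check, run on the target node:

Code:
# confirm the pools referenced by the replication jobs exist here
zpool list
zfs list -r VM-STOCKAGE FOG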


Where do I find the replication log file?
 
You can press the 'Log' button in the GUI and copy/paste it here. Or where did you see the error?
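
If you prefer the shell, the runner also keeps per-job log files; on my PVE 5.x installs they live under /var/log/pve/replicate/, one file per job ID (path is from memory, so treat it as an assumption):

Code:
# hedged: per-job replication log, named after the job ID
cat /var/log/pve/replicate/106-0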

On the second one, I can't replicate; the log shows "unable to open file - No such file or directory" for the 2 VMs.


OK! This is what I wrote in my previous message. In the GUI, the Log button shows this:

"unable to open file - No such file or directory"

Same message for the two VMs on node svr-07-hve.
On node svr-09-hve everything is OK; all my VMs are replicated.
 
Oh OK, it seems the replication has not run yet. What does it show as the last/next replication?
 
It never replicates. When I try to launch it manually, I get the same error message.
 
Can you post the output of
Code:
pvesr status
(from the node where it does not work)
 
JobID  Enabled  Target            LastSync  NextSync  Duration  FailCount  State
100-0  Yes      local/svr-09-hve  -         pending   -         0          OK
106-0  Yes      local/svr-09-hve  -         pending   -         0          OK
 
OK, I need more output (sorry).

Can you post the output of
Code:
systemctl list-timers
systemctl status pvesr
systemctl status pvesr.timer
 
root@svr-07-hve:~# systemctl list-timers
NEXT                          LEFT        LAST                          PASSED   UNIT                          ACTIVATES
Wed 2017-07-12 14:32:55 CEST  51min left  Tue 2017-07-11 14:32:55 CEST  23h ago  systemd-tmpfiles-clean.timer  systemd-tmpfiles-clean.service
Thu 2017-07-13 04:31:56 CEST  14h left    Wed 2017-07-12 07:34:14 CEST  6h ago   apt-daily.timer               apt-daily.service
Thu 2017-07-13 06:24:23 CEST  16h left    Wed 2017-07-12 06:37:55 CEST  7h ago   apt-daily-upgrade.timer       apt-daily-upgrade.service

3 timers listed.
Pass --all to see loaded but inactive timers, too.
 
root@svr-07-hve:~# systemctl status pvesr
● pvesr.service - Proxmox VE replication runner
Loaded: loaded (/lib/systemd/system/pvesr.service; static; vendor preset: enabled)
Active: inactive (dead)
 
root@svr-07-hve:~# systemctl status pvesr.timer
● pvesr.timer - Proxmox VE replication runner
Loaded: loaded (/lib/systemd/system/pvesr.timer; disabled; vendor preset: enabled)
Active: inactive (dead)
 
OK, it seems pvesr.timer is not enabled on that node. Try:
Code:
systemctl enable pvesr.timer
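
Note that enabling only arms the timer for the next boot; to start it immediately and confirm it is scheduled:

Code:
systemctl start pvesr.timer
systemctl list-timers pvesr.timer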