Proxmox 5.0 replication

Oh, and:
Code:
systemctl start pvesr.timer
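You can check afterwards that the timer is actually active, for example with:
Code:
systemctl status pvesr.timer
systemctl list-timers pvesr.timer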
 
Hello,

Thanks. Do I have to run the commands on each node, or only on the node that doesn't work?


Now, on the node that works, I get this message for all my scheduled replication jobs:

end replication job with error: command 'set -o pipefail && pvesm export VM-STOCKAGE:vm-102-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_102-0_1499866500__ | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=svr-07-hve' root@172.18.251.237 -- pvesm import VM-STOCKAGE:vm-102-disk-1 zfs - -with-snapshots 1' failed: exit code 255
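Should I try running just the SSH part of that command by hand, to see whether the connection itself works? For example (same alias and address as in the failing command):
Code:
/usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=svr-07-hve' root@172.18.251.237 -- pvesm status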
 
The replication now seems to start successfully on the node that didn't work before.

In the log view of the GUI I see this:

2017-07-12 15:39:00 100-0: start replication job
2017-07-12 15:39:00 100-0: guest => VM 100, running => 18439
2017-07-12 15:39:00 100-0: volumes => VM-STOCKAGE:vm-100-disk-1,VM-STOCKAGE:vm-100-disk-2
2017-07-12 15:39:01 100-0: create snapshot '__replicate_100-0_1499866740__' on VM-STOCKAGE:vm-100-disk-1
2017-07-12 15:39:02 100-0: create snapshot '__replicate_100-0_1499866740__' on VM-STOCKAGE:vm-100-disk-2
2017-07-12 15:39:02 100-0: full sync 'VM-STOCKAGE:vm-100-disk-1' (__replicate_100-0_1499866740__)
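Should I also check on the target node whether the replication snapshots arrived, for example with:
Code:
zfs list -t snapshot | grep vm-100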
 
On the node that did not work before, the second VM does not replicate, with this error message in the log view:

2017-07-12 15:55:01 106-0: end replication job with error: zfs error: cannot create snapshot 'FOG/vm-106-disk-1@__replicate_106-0_1499867700__': out of space

I don't understand, because there is enough free space on the second node. The zpool 'FOG' is the same on both nodes, 1 TB each.
 
Is it because the FOG pool is almost full on the node where the VM is running?

Does it need more free space on the working node to create the snapshot before sending it to the other node?
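If it helps, I can post the space accounting for that disk from the source node, for example:
Code:
zfs get volsize,refreservation,usedbyrefreservation,available FOG/vm-106-disk-1
zpool list FOG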
 
Me again. Today, apart from the FOG VM with the space problem, all the other VMs on both nodes fail. Same message for all of them:

end replication job with error: command 'set -o pipefail && pvesm export VM-STOCKAGE:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1499928780__ | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=svr-09-hve' root@172.18.251.239 -- pvesm import VM-STOCKAGE:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
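Should I post the output of the following from both nodes?
Code:
pvesr status
zfs list -t snapshot | grep __replicate_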
 
Can you post the output of
Code:
zfs list -t all
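and, if possible, also the pool overview from both nodes:
Code:
zpool list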
 
end replication job with error:
Sadly, the error message alone is not enough to know what exactly the problem was; you need to look into the replication log.
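For example, on the node that runs the job (assuming the standard systemd unit names):
Code:
journalctl -u pvesr.service --since today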
 
2017-07-13 10:23:00 100-0: start replication job
2017-07-13 10:23:00 100-0: guest => VM 100, running => 5336
2017-07-13 10:23:00 100-0: volumes => VM-STOCKAGE:vm-100-disk-1,VM-STOCKAGE:vm-100-disk-2
2017-07-13 10:23:01 100-0: create snapshot '__replicate_100-0_1499934180__' on VM-STOCKAGE:vm-100-disk-1
2017-07-13 10:23:02 100-0: create snapshot '__replicate_100-0_1499934180__' on VM-STOCKAGE:vm-100-disk-2
2017-07-13 10:23:02 100-0: full sync 'VM-STOCKAGE:vm-100-disk-1' (__replicate_100-0_1499934180__)
2017-07-13 10:23:04 100-0: delete previous replication snapshot '__replicate_100-0_1499934180__' on VM-STOCKAGE:vm-100-disk-1
2017-07-13 10:23:04 100-0: delete previous replication snapshot '__replicate_100-0_1499934180__' on VM-STOCKAGE:vm-100-disk-2
2017-07-13 10:23:05 100-0: end replication job with error: command 'set -o pipefail && pvesm export VM-STOCKAGE:vm-100-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_100-0_1499934180__ | /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=svr-09-hve' root@172.18.251.239 -- pvesm import VM-STOCKAGE:vm-100-disk-1 zfs - -with-snapshots 1' failed: exit code 255
 
Good day. I am building a cluster of two nodes.
Question: how do I configure failover so that, if a virtual machine goes down, it is picked up by the second node?
 
Please post in English only.
 
I also have a problem with replication. Here's the log:

2017-12-15 14:42:00 8888-0: start replication job
2017-12-15 14:42:03 8888-0: guest => VM 8888, running => 5685
2017-12-15 14:42:03 8888-0: volumes => local-zfs:vm-8888-disk-1,local-zfs:vm-8888-disk-2
2017-12-15 14:42:04 8888-0: create snapshot '__replicate_8888-0_1513345320__' on local-zfs:vm-8888-disk-1
2017-12-15 14:42:04 8888-0: create snapshot '__replicate_8888-0_1513345320__' on local-zfs:vm-8888-disk-2
2017-12-15 14:42:04 8888-0: incremental sync 'local-zfs:vm-8888-disk-1' (__replicate_8888-0_1510307449__ => __replicate_8888-0_1513345320__)
2017-12-15 14:42:05 8888-0: internal error: Invalid argument
2017-12-15 14:42:05 8888-0: command 'zfs send -Rpv -I __replicate_8888-0_1510307449__ -- rpool/data/vm-8888-disk-1@__replicate_8888-0_1513345320__' failed: got signal 6
2017-12-15 14:42:05 8888-0: rpool/data/vm-8888-disk-1@__replicate_8888-0_1510307449__ name rpool/data/vm-8888-disk-1@__replicate_8888-0_1510307449__ -
2017-12-15 14:42:05 8888-0: cannot receive: failed to read from stream
2017-12-15 14:42:05 8888-0: command 'zfs recv -F -- rpool/data/vm-8888-disk-1' failed: exit code 1
2017-12-15 14:42:05 8888-0: delete previous replication snapshot '__replicate_8888-0_1513345320__' on local-zfs:vm-8888-disk-1
2017-12-15 14:42:05 8888-0: delete previous replication snapshot '__replicate_8888-0_1513345320__' on local-zfs:vm-8888-disk-2
2017-12-15 14:42:06 8888-0: end replication job with error: command 'set -o pipefail && pvesm export local-zfs:vm-8888-disk-1 zfs - -with-snapshots 1 -snapshot __replicate_8888-0_1513345320__ -base __replicate_8888-0_1510307449__ | /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve-5-9-153-37' root@10.1.1.1 -- pvesm import local-zfs:vm-8888-disk-1 zfs - -with-snapshots 1 -base __replicate_8888-0_1510307449__' failed: exit code 255
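If it helps for debugging, I could try to reproduce the failing send locally on the first node, without SSH in between, using a throwaway snapshot (the @sendtest name below is just an example, since the job deletes its own snapshot after the failure):
Code:
zfs snapshot rpool/data/vm-8888-disk-1@sendtest
zfs send -Rpv -I __replicate_8888-0_1510307449__ rpool/data/vm-8888-disk-1@sendtest > /dev/null
zfs destroy rpool/data/vm-8888-disk-1@sendtest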

First Node: zfs list -t all
NAME  USED  AVAIL  REFER  MOUNTPOINT
rpool/data 1.20T 1.42T 96K /rpool/data
rpool/data/vm-200-disk-1 888M 1.42T 888M -
rpool/data/vm-202-disk-1 63.9G 1.42T 42.6G -
rpool/data/vm-202-disk-1@__replicate_202-0_1510307400__ 21.3G - 43.1G -
rpool/data/vm-202-disk-2 621G 1.42T 613G -
rpool/data/vm-202-disk-2@__replicate_202-0_1510307400__ 8.31G - 585G -
rpool/data/vm-211-disk-1 10.9G 1.42T 8.51G -
rpool/data/vm-211-disk-1@__replicate_211-0_1510307422__ 2.37G - 8.45G -
rpool/data/vm-301-disk-1 31.0G 1.42T 31.0G -
rpool/data/vm-8888-disk-1 86.9G 1.42T 71.4G -
rpool/data/vm-8888-disk-1@__replicate_8888-0_1510307449__ 15.5G - 71.9G -
rpool/data/vm-8888-disk-2 419G 1.42T 409G -
rpool/data/vm-8888-disk-2@__replicate_8888-0_1510307449__ 9.18G - 402G -
rpool/swap 8.50G 1.42T 4.17G -

Second Node: zfs list -t all
NAME  USED  AVAIL  REFER  MOUNTPOINT
rpool/data 6.07T 8.15T 91.0G /rpool/data
rpool/data/base-999-disk-1 11.3G 8.15T 11.3G -
rpool/data/base-999-disk-1@__base__ 11.6K - 11.3G -
rpool/data/vm-100-disk-1 1.23G 8.15T 1.23G -
rpool/data/vm-101-disk-1 97.1G 8.15T 97.1G -
rpool/data/vm-101-disk-2 4.24T 8.15T 4.24T -
rpool/data/vm-102-disk-1 2.39G 8.15T 2.39G -
rpool/data/vm-111-disk-1 14.7G 8.15T 14.7G -
rpool/data/vm-202-disk-1 62.6G 8.15T 62.6G -
rpool/data/vm-202-disk-1@__replicate_202-0_1510307400__ 0B - 62.6G -
rpool/data/vm-202-disk-2 850G 8.15T 850G -
rpool/data/vm-202-disk-2@__replicate_202-0_1510307400__ 0B - 850G -
rpool/data/vm-210-disk-1 17.5G 8.15T 17.5G -
rpool/data/vm-211-disk-1 12.3G 8.15T 12.3G -
rpool/data/vm-211-disk-1@__replicate_211-0_1510307422__ 0B - 12.3G -
rpool/data/vm-222-disk-1 93K 8.15T 93K -
rpool/data/vm-2222-disk-1 13.4G 8.15T 13.4G -
rpool/data/vm-400-disk-1 12.3G 8.15T 12.3G -
rpool/data/vm-8888-disk-1 105G 8.15T 105G -
rpool/data/vm-8888-disk-1@__replicate_8888-0_1510307449__ 0B - 105G -
rpool/data/vm-8888-disk-2 585G 8.15T 585G -
rpool/data/vm-8888-disk-2@__replicate_8888-0_1510307449__ 0B - 585G -
rpool/swap 9.62G 8.15T 9.62G -
root@pve-5-9-153-37:~#
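I can also compare the base snapshot on the two nodes if that is useful, for example (run on both nodes):
Code:
zfs get guid,creation rpool/data/vm-8888-disk-1@__replicate_8888-0_1510307449__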

Many thanks for the help
TD
 
