[SOLVED] 2 Node ZFS replication not using link0

dsh

I have configured a two-node cluster using the web GUI and set up replication from pve1 -> pve2.

cluster config.png

In the cluster config I set up link0 as the main link, but when I test, the traffic uses link1.
I downloaded a 500 MB file on a virtual machine on pve1 and measured the bandwidth with iftop. The replication is definitely using link1.

iftop.png
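For the measurement itself, iftop can be bound to a single NIC, which makes it easier to see which network the transfer actually goes over (the interface names below are placeholders for my real NICs):

# watch the NIC on the 10.0.0.x (link0) side
iftop -i ens18

# watch the NIC on the 192.168.1.x (link1) side
iftop -i ens19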

corosync.conf

logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.0.51
    ring1_addr: 192.168.1.51
  }
  node {
    name: pve2
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 10.0.0.52
    ring1_addr: 192.168.1.52
  }
}

quorum {
  provider: corosync_votequorum
}

totem {
  cluster_name: cluster1
  config_version: 2
  interface {
    linknumber: 0
  }
  interface {
    linknumber: 1
  }
  ip_version: ipv4-6
  link_mode: passive
  secauth: on
  version: 2
}
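In case it helps, which link corosync itself is actually using can be checked directly; a quick check, assuming the corosync 3 / kronosnet tooling that current PVE ships with:

# show the local node id and the up/down status of each kronosnet link
corosync-cfgtool -s

# cluster membership and quorum overview
pvecm status

That should make it clear whether corosync is on link0 as intended and only the replication/migration traffic ends up on link1.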



Is this a bug, or did I configure it wrong?
 
I thought link0 would be used for storage replication and migration. I set the 10.0.0.x network as the only connection for corosync, but when I migrate VMs or run storage replication, the traffic still goes through the 192.168.1.x network.

How can I make migration and storage replication go through a specific address/interface?
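From what I have read so far, the corosync links only carry cluster traffic; migration (and, if I understand the docs correctly, the storage replication jobs as well) use the migration network configured in /etc/pve/datacenter.cfg instead of corosync.conf. A sketch of what I plan to try, assuming 10.0.0.0/24 is the subnet I want the traffic on:

/etc/pve/datacenter.cfg

# route migration traffic over the 10.0.0.x network (my link0 subnet)
# "secure" keeps the transfer tunnelled over SSH
migration: secure,network=10.0.0.0/24

The same option is also exposed in the GUI under Datacenter -> Options -> Migration Settings.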
 
