- Jul 18, 2021
Hi, running this on the exit node did not work:

Code:
sysctl -w net.ipv4.conf.all.rp_filter=0
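(Side note: sysctl -w only lasts until the next reboot. To make it persistent on a Debian/Proxmox host you could drop it in a sysctl.d file; the file name below is arbitrary, and the default.rp_filter line is an extra assumption, not something from this thread.)

Code:
# /etc/sysctl.d/90-sdn.conf
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

and apply it without rebooting via sysctl --system.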
Here is my /etc/network/interfaces:

Code:
auto lo
iface lo inet loopback

iface ens3f0 inet manual

iface ens3f1 inet manual
        mtu 9000

# WAN IP
auto vmbr0
iface vmbr0 inet static
        address xx.xx.xx.xx/24
        gateway xx.xx.xx.xx
        bridge-ports ens3f0
        bridge-stp off
        bridge-fd 0

# Preparing LAN interface
auto vmbr1
iface vmbr1 inet manual
        bridge-ports ens3f1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        mtu 8900

# Attaching a VLAN on vmbr1 - I could attach many, all given by service provider Scaleway
# This is the network used to create the cluster
auto vmbr1.2017
iface vmbr1.2017 inet static
        address 10.20.17.2/24
        mtu 8800

## I also tried with this very straightforward config, but the same errors occurred:
#auto ens3f1.2017
#iface ens3f1.2017 inet static
#        address 10.20.17.1/24

source /etc/network/interfaces.d/*
Code:
root@mynode1:~# pvecm status
Cluster information
-------------------
Name:             ClusterV2
Config Version:   2
Transport:        knet
Secure auth:      on

Quorum information
------------------
Date:             Mon Aug 2 18:10:43 2021
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1.43
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 10.20.17.1 (local)
0x00000002          1 10.20.17.2
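For what it's worth, a quick way to confirm the 8800 MTU really works end to end between the cluster IPs (assuming node1 pings node2; the -s value is the MTU minus 28 bytes of IPv4 + ICMP headers):

Code:
# 8800 - 20 (IPv4 header) - 8 (ICMP header) = 8772 bytes of payload
# -M do forbids fragmentation, so an oversized MTU fails loudly
ping -M do -s 8772 -c 3 10.20.17.2

If this fails while smaller sizes work, the 8800 MTU is not actually usable on the VLAN.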
Ok, the GUI still needs support for this; I'll try to send a patch soon (and at least make it show a correct error message).
Could you also help with the right value for the MTU? Our service provider's VLAN accepts 9000. Should I reduce it in the zone params or somewhere else?

If you use vxlan, you need to lower it by 50 bytes, so 8850 max (vmbr1 above is at 8900). You can set it in the zone, but it should also be done inside the guest (the default is 1500 in the guest anyway).
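In case it helps, here is a sketch of what that looks like inside a Debian-based guest; the interface name eth0 is an assumption:

Code:
# inside the guest: zone MTU 8900 minus 50 bytes of VXLAN overhead = 8850
ip link set dev eth0 mtu 8850

# or persistently in the guest's /etc/network/interfaces:
# iface eth0 inet dhcp
#     mtu 8850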
Hi @spirit, it ended up working, thanks for the help. I don't have other nodes added to this cluster since I am still testing new features out.

Hi,
I'm back from holiday.
Can you try

Code:
sysctl -w net.ipv4.tcp_l3mdev_accept=1

on the exit node, then restart ssh or pveproxy?
Then you should be able to reach the exit node IP from the VM.
(I don't know about the other nodes (non-exit nodes) of this cluster; do you have the problem there too? It should be routed like your other cluster nodes.)
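For anyone landing here later, the full sequence on the exit node might look like this; the sysctl.d file name is made up, and ssh/pveproxy are the standard Debian/Proxmox unit names:

Code:
# allow listening TCP sockets to accept connections arriving over a VRF/l3mdev device
sysctl -w net.ipv4.tcp_l3mdev_accept=1

# persist across reboots (arbitrary file name)
echo 'net.ipv4.tcp_l3mdev_accept=1' > /etc/sysctl.d/90-l3mdev.conf

# restart the daemons so they pick up the new setting
systemctl restart ssh pveproxy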