> ok, got it. Do you use the Proxmox firewall on these nodes? (I'm not sure where the TCP reset is coming from.) The routing seems to be OK.

I don't use the Proxmox firewall, and I have it turned off at both the Datacenter and Node level, I think.
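Not from the thread itself, but "turned off, I think" is easy to verify: the firewall state can be checked from the CLI on each node with the stock pve-firewall tool.

Code:
# is the firewall daemon actually enforcing rules on this node?
pve-firewall status

# show the ruleset it would compile from the current config
pve-firewall compile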
> I don't use the Proxmox firewall, and I have it turned off at both the Datacenter and Node level, I think.

Maybe try this on the exit node:

Code:
sysctl -w net.ipv4.conf.all.rp_filter=0
> Maybe try this on the exit node: sysctl -w net.ipv4.conf.all.rp_filter=0

Hi,
Running

Code:
sysctl -w net.ipv4.conf.all.rp_filter=0

on the exit node did not work.

I'll be back from holiday next week, and I'll do more tests.
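One rp_filter subtlety worth knowing (my note, not spirit's): for source validation the kernel uses the maximum of net.ipv4.conf.all.rp_filter and the per-interface value, so setting only "all" to 0 changes nothing while "default" or a specific interface still has 1. A sketch:

Code:
# the effective value is max(conf/all, conf/<iface>), so inspect them all
sysctl -a 2>/dev/null | grep '\.rp_filter'

# loosen both knobs, then persist across reboots
sysctl -w net.ipv4.conf.all.rp_filter=0
sysctl -w net.ipv4.conf.default.rp_filter=0
printf 'net.ipv4.conf.all.rp_filter = 0\nnet.ipv4.conf.default.rp_filter = 0\n' > /etc/sysctl.d/90-rpfilter.conf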
auto lo
iface lo inet loopback

iface ens3f0 inet manual

iface ens3f1 inet manual
    mtu 9000

# WAN IP
auto vmbr0
iface vmbr0 inet static
    address xx.xx.xx.xx/24
    gateway xx.xx.xx.xx
    bridge-ports ens3f0
    bridge-stp off
    bridge-fd 0

# Preparing LAN interface
auto vmbr1
iface vmbr1 inet manual
    bridge-ports ens3f1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
    mtu 8900

# Attaching a VLAN on vmbr1 - I could attach many, all given by service provider Scaleway
# This is the network used to create the cluster
auto vmbr1.2017
iface vmbr1.2017 inet static
    address 10.20.17.2/24
    mtu 8800

## I also tried this very straightforward config, but the same errors occurred:
#auto ens3f1.2017
#iface ens3f1.2017 inet static
#    address 10.20.17.1/24

source /etc/network/interfaces.d/*
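Not part of the original post, but a jumbo-frame path like this one can be sanity-checked end to end with a don't-fragment ping; the payload size below assumes the MTUs from the config above (ICMP payload = MTU minus 28 bytes of IP and ICMP headers on IPv4).

Code:
# confirm the VLAN interface really picked up the intended MTU
ip -d link show vmbr1.2017

# probe the 8800-byte path to the peer node without fragmentation
# (8800 - 20 IP - 8 ICMP = 8772 bytes of payload)
ping -M do -s 8772 10.20.17.1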
root@mynode1:~# pvecm status
Cluster information
-------------------
Name: ClusterV2
Config Version: 2
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Mon Aug 2 18:10:43 2021
Quorum provider: corosync_votequorum
Nodes: 2
Node ID: 0x00000001
Ring ID: 1.43
Quorate: Yes
Votequorum information
----------------------
Expected votes: 2
Highest expected: 2
Total votes: 2
Quorum: 2
Flags: Quorate
Membership information
----------------------
Nodeid Votes Name
0x00000001 1 10.20.17.1 (local)
0x00000002 1 10.20.17.2
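An aside, not from the thread: when a cluster running over a provider VLAN misbehaves, the knet link state is worth checking next to pvecm status; corosync-cfgtool ships with the standard corosync package.

Code:
# per-node, per-link knet connectivity as corosync sees it
corosync-cfgtool -s

# look for link flaps in corosync's logs over the last hour
journalctl -u corosync --since -1h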
Hi @spirit!
Thank you, it works. Sorry for that nonsense of mine: I had indeed put a VLAN ID in the VM NIC options, and the GUI doesn't actually forbid it.
Could you also help with the right MTU value? Our service provider's VLAN accepts 9000; should I reduce it in the zone params or somewhere else?
Thanks again.

ok. The GUI still needs support for this; I'll try to send a patch soon (and at least make it return a correct error message).
If you use VXLAN, you need to lower it by 50 bytes, so 8850 max. You can set it in the zone, but it should also be done inside the guest (the default is 1500 in the guest anyway).
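To spell out the arithmetic behind spirit's 50 bytes: VXLAN over IPv4 adds an inner Ethernet header (14), a VXLAN header (8), UDP (8), and an outer IP header (20), i.e. 50 bytes of encapsulation, so with the 8900-byte vmbr1 above the guests get at most 8850. A sketch of the guest side, assuming a Debian-style guest with eth0:

Code:
# inside the guest: /etc/network/interfaces
auto eth0
iface eth0 inet dhcp
    mtu 8850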
Hi,
I'm back from holiday.
Can you try

Code:
sysctl -w net.ipv4.tcp_l3mdev_accept=1

on the exit node, then restart ssh or pveproxy? You should then be able to reach the exit node IP from the VM.
(I don't know about the other nodes (non-exit nodes) of this cluster; do you have the problem there too? It should be routed like your other cluster nodes.)

Hi @spirit, it ended up working, thanks for the help. I don't have other nodes added to this cluster since I am still testing new features out.
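Some context on that sysctl (my gloss, not from the thread): the EVPN exit node puts its routes in a VRF, and by default TCP sockets listening in the default VRF do not accept connections arriving through an l3mdev/VRF device; tcp_l3mdev_accept=1 relaxes that. To keep it across reboots:

Code:
# let services listening outside the VRF (sshd, pveproxy) accept
# connections that arrive via a VRF (l3mdev) interface
echo 'net.ipv4.tcp_l3mdev_accept = 1' > /etc/sysctl.d/90-l3mdev.conf
sysctl -p /etc/sysctl.d/90-l3mdev.conf

# restart the listeners so they pick up the change
systemctl restart sshd pveproxy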
Hi,
I'm trying to remove a subnet in SDN and get the following error:

Code:
delete sdn subnet object failed: cannot delete subnet '10.26.0.0/24', not empty (500)

I did a grep for the vnet it's supposed to use in /etc/pve/nodes, but it seems no guest is using it.
Any idea?
Edit: I deleted all the entries directly in /etc/pve/sdn/subnets.cfg and it worked. Is the error expected behavior?
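For anyone hitting the same error before the fix spirit mentions below, the places worth grepping (standard PVE paths) are the SDN config files and the guest configs:

Code:
# is the subnet still referenced anywhere in the SDN config?
grep -r '10.26.0.0/24' /etc/pve/sdn/

# is any guest NIC still attached to the vnet? (myvnet is a placeholder)
grep -r 'bridge=myvnet' /etc/pve/nodes/*/qemu-server/ /etc/pve/nodes/*/lxc/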
root@ahuntz:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.128-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
pve-kernel-5.4: 6.4-5
pve-kernel-helper: 6.4-5
pve-kernel-5.4.128-1-pve: 5.4.128-1
pve-kernel-5.4.124-1-pve: 5.4.124-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 3.0.0-1+pve4~bpo10
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.1.0-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-network-perl: 0.6.0
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.3-1
proxmox-backup-client: 1.1.12-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.6-1
pve-cluster: 6.4-1
pve-container: 3.3-6
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.5-pve1~bpo10+1
ok, this is a bug that is fixed in 0.6.1, but only for Proxmox 7 (new updates will only be provided for Proxmox 7, as SDN is still beta).
Also, no gateway was defined on the subnet.
Hi guys,
Glad to talk here again (but maybe I shouldn't?)
I just integrated two nodes into a cluster with SDN. The first one was a little capricious, and a reboot (or the proper firewall rules O:-)) got my SDN conf deployed on it.
The second one is still stuck in "pending" status, despite a reboot, many clicks on Apply, and restarting pve-cluster.
The IP of that node, on the right network (which pings its peers fine), is in the peer list of my zones.
I didn't want to create /etc/network/interfaces.d/sdn manually, but maybe I should? I suspect that won't solve the problem, though... (even if WD40 really does help my old truck start when it's being difficult ^^)
Actually, I did it... and of course it didn't change anything: when applying the network, my node is not in the list of `reloadnetworkall` tasks, but it sits in the SDN host list with "pending" status.
libpve-network-perl and ifupdown2 are installed.
Did I forget something?

so, a firewall problem? Can you give more details?
(And no, you don't need to create /etc/network/interfaces.d/sdn manually.)
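An aside with a few things worth comparing on a node stuck in "pending" (standard paths, not advice from the thread): the cluster-wide SDN definition versus what the node actually generated locally.

Code:
# the cluster-wide SDN definition, shared via pmxcfs
cat /etc/pve/sdn/zones.cfg /etc/pve/sdn/vnets.cfg

# what this node generated on its last apply (absent = never applied)
ls -l /etc/network/interfaces.d/sdn

# the prerequisites mentioned in the post
dpkg -l libpve-network-perl ifupdown2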
Hi @spirit,
I was being optimistic in thinking I had solved the first node's SDN problem with missing firewall rules. Actually, there are inconsistencies in our firewall conf, since nodes without the supposedly required rules still get the SDN deployed locally. A colleague told me that somewhere in the docs it says the rules needed for cluster operation are not meant to be defined manually, but are managed by the cluster. Agreed?
mmm, that's strange; it's like the configuration generation is not being called on the second node.

Yes, the 'source ...' line in interfaces is there, and I deleted the sdn file I had created manually; it doesn't get updated anyway when I push another deploy.
And there is no error in the task list when applying the SDN deploy, but indeed no line for our second node.
mmm, I really don't understand, sorry... the Apply button really only launches this command for each node, nothing else:

Code:
pvesh set /nodes/$secondnodename/network

It worked \o/
And now Apply does work for that node too.
Thanks a lot!
But... I'd like to understand where/how it got messed up. Any idea?
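Since the Apply button boils down to that single API call, the same thing can be scripted across the cluster; a minimal sketch built on the command spirit quoted (jq assumed to be installed):

Code:
# replay what the GUI's Apply does, node by node
for node in $(pvesh get /nodes --output-format json | jq -r '.[].node'); do
    echo "reloading network config on $node"
    pvesh set /nodes/$node/network
done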
ii frr 7.5.1-1.1 amd64 FRRouting suite of internet protocols (BGP, OSPF, IS-IS, ...)
ii frr-pythontools 7.5.1-1.1 all FRRouting suite - Python tools
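With FRR installed for the EVPN setup, the BGP side can be inspected through vtysh; the usual read-only checks (assuming the daemons are running) are:

Code:
# overall BGP session state, including EVPN peers
vtysh -c 'show bgp summary'

# the EVPN routes FRR has learned and advertised
vtysh -c 'show bgp l2vpn evpn'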