you can use the same ASN for both evpn && bgp
put all your exit-nodes
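A minimal sketch of what that can look like (the ASN, peer addresses, zone and node names below are only placeholders):

/etc/pve/sdn/controllers.cfg
evpn: evpnctl
        asn 65000
        peers 10.0.0.11,10.0.0.12,10.0.0.13

/etc/pve/sdn/zones.cfg
evpn: myzone
        controller evpnctl
        vrf-vxlan 10000
        exitnodes pve1,pve2
        mtu 1450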
indeed, you don't need a static route if you use bgp.
The route is announced by all the exit-nodes (even if the vm is not on that node, and even if you define every node as an exit-node).
So, in all cases, you'll have ecmp. If a packet is coming...
Technically, you need to have a route on your fortigate to the evpn subnets, with the exit-nodes as gateways.
This route can be static or received through bgp (in this case, you need to define bgp controllers on proxmox on the exit-nodes with the fortigate as peer, to announce through bgp the...
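Roughly, a bgp controller per exit-node in /etc/pve/sdn/controllers.cfg could look like this (the ids, ASN and the fortigate address are placeholders):

bgp: bgppve1
        node pve1
        asn 65000
        peers 192.168.1.254

bgp: bgppve2
        node pve2
        asn 65000
        peers 192.168.1.254

with 192.168.1.254 standing in for the fortigate; it then learns the evpn subnets from both exit-nodes and can do ecmp between them.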
Hi,
There was a regression in frr when pve8 was released, and "match evpn vni x" was not working anymore,
so I have replaced it with "match ip address prefix-list only_default"
https://git.proxmox.com/?p=pve-network.git;a=commit;h=e614da43f13e3c61f9b78ee9984364495eff91b6
I think this is...
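For reference, a route-map using that prefix-list looks roughly like this in frr syntax (the route-map name and sequence numbers are just illustrative, not the exact ones from the commit):

ip prefix-list only_default seq 1 permit 0.0.0.0/0
!
route-map MAP_EXIT permit 10
 match ip address prefix-list only_default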
if you want to add an ip on a sdn vnet on a specific host,
you can edit:
/etc/network/interfaces
iface <vnet>
address ....
It'll be merged with the sdn configuration.
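For example, assuming a vnet named "vnet1" and a made-up address:

iface vnet1
        address 10.0.10.254/24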
The exit-node is only a node announcing the default 0.0.0.0 evpn type-5 route. (so every node is forwarding outside traffic to the exit-node in evpn, and the exit-node is routing again between the evpn network and the default vrf through classic bgp)
if you peer all nodes directly in bgp from their vrf...
in the proxmox evpn sdn, each zone is a different vrf.
evpn peering needs to be done in the default vrf.
extra bgp peering could be done in a vrf, but it's not currently implemented. (we do it in the default vrf, and leak routes from the tenants' evpn vrfs).
about /etc/frr/frr.conf, you can create this...
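As a rough sketch of the leaking part (assuming ASN 65000, a zone named "myzone", and the vrf being named "vrf_myzone" by proxmox), the extra config in the default vrf would look something like:

router bgp 65000
 address-family ipv4 unicast
  import vrf vrf_myzone
 exit-address-family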
interesting. In your use case, is the SVI for br_ceph different on each host?
another possible tuning:
# sysctl -wq net.ipv4.fib_multipath_hash_fields=0x0037
# sysctl -wq net.ipv4.fib_multipath_hash_policy=3
0x0001 Source IP address
0x0002 Destination IP address
0x0004 IP...
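To keep that tuning across reboots, you can drop the values in a sysctl file (the filename is arbitrary):

/etc/sysctl.d/90-ecmp-hash.conf
net.ipv4.fib_multipath_hash_fields = 0x0037
net.ipv4.fib_multipath_hash_policy = 3

# sysctl --system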
if you use a layer3 spine/leaf architecture, you need to use vxlan or evpn (evpn is vxlan + bgp) to create a virtual layer2 network on top, as you want to be able to move/live migrate vms between hosts and share the same subnet/ips between hosts.
nothing is needed on the real switches. you just need...
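For a plain vxlan zone (without evpn/bgp), the zone definition is basically just the list of peer node addresses, something like this in /etc/pve/sdn/zones.cfg (names and addresses are examples):

vxlan: vxzone
        peers 10.0.0.11,10.0.0.12,10.0.0.13
        mtu 1450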
a true spine-leaf is only layer3 (routed), so you can't use vlans. you need to use a vxlan overlay if you want to propagate the same subnet across the different leafs.
Now, if by "spine-leaf" you mean a classic architecture with core/access switches doing layer2, you can use vlans...
we are using a pure spine/leaf architecture in our datacenter. (only layer3, with point-to-point links && bgp between the switches && the proxmox hosts, with dual nics balanced with ecmp).
I don't have evpn support on my spine && leaf switches, I'm only doing evpn between our proxmox nodes && our main routers...
I'm using it in production, on different subnets (but on a private network).
how much latency do you have between your hosts through the public network? (just do a ping).
Then, you need to check with vtysh -c "show bgp summary" that bgp is established and routes are exchanged. (nodes just need...
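A few other standard frr show commands that can help to verify the evpn side:

# vtysh -c "show bgp summary"
# vtysh -c "show bgp l2vpn evpn summary"
# vtysh -c "show bgp l2vpn evpn route"
# vtysh -c "show evpn vni"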
we are in 2024, you can use small dc-grade m.2 nvme drives with plp,
like the DC1000M U.2 NVMe SSD. (50€ for 1TB)
you can also use them in hardware raid1. (Dell, HP, Supermicro, Lenovo,... all have small internal controllers with 2 nvme in raid1).
Proxmox uses a distributed/replicated filesystem for...
when you create the pool, there is an option to directly add it as a storage with the same name as the pool (in /etc/pve/storage.cfg).
It's just a shortcut, you can do it manually if you want (create an rbd storage using the pool).
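Manually, it's just an entry in /etc/pve/storage.cfg, for example (storage id and pool name are placeholders):

rbd: myrbdstorage
        pool mypool
        content images,rootdir
        krbd 0

or the equivalent "pvesm add rbd myrbdstorage --pool mypool --content images,rootdir".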