Hello,
I'm currently labbing up Proxmox to see if we can replace our VMware/NSX-T deployments with it. The initial test looked really promising. Then I went to do a closer-to-production deployment, with separated management and routing networks, and it's all fallen apart. Here's the basic networking overview for what I'm trying to do.
With the cluster formed and management configured, it was working great. Then I set up EVPN and the BGP controllers, and it just wasn't working. I SSH'd into a hypervisor, looked at the routing table, and realised it isn't creating a VRF/route table to hold the base EVPN routing - it's just inserting the default routes from BGP into the base routing table. So now there are two separate sets of default routes: one out the management network, and one out the public network (i.e. via the IPs of 10G-P1 and 10G-P2 on VLAN2 in the above diagram).
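For reference, this is roughly how I'm checking it on the hypervisor (the zone/VRF name vrf_evpnz1 is just a placeholder, not my actual config):

```
# main routing table - this is where the BGP default routes are landing (unexpected)
ip route show

# what I'd expect: a VRF device per EVPN zone with its own route table
ip -d link show type vrf
ip route show vrf vrf_evpnz1

# FRR's view of the VRFs and the EVPN session
vtysh -c "show ip route vrf all"
vtysh -c "show bgp l2vpn evpn summary"
```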
The basic premise we're aiming for is that the hypervisors must only be reachable on the MGMT network, and must only be able to talk outbound via the MGMT network. VMs behind EVPN must only be able to talk outbound via the VLAN2 networking (or on a trunked VLAN, but I'm not testing that right now, as I figure that's 'normal' functionality).
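To illustrate the separation I'm after, here's a minimal sketch at the plain Linux level (interface names, table number, gateways and the VRF name are all made up for the example, not what Proxmox actually generates):

```
# MGMT default stays in the main table - hypervisor traffic only
ip route add default via 192.0.2.1 dev vmbr0

# EVPN/VM traffic gets its own VRF bound to its own kernel route table
ip link add vrf_evpnz1 type vrf table 1000
ip link set vrf_evpnz1 up
ip route add default via 203.0.113.1 dev vmbr1 table 1000

# result: "ip route show" only has the MGMT default,
# "ip route show vrf vrf_evpnz1" only has the VLAN2 default
```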
Did I miss a tickbox somewhere to tell it that the EVPN routing must be kept separate from the hypervisor routing? Or is this not possible with the Proxmox SDN as currently implemented? Would I be better off just using VXLAN VNets, and then running a couple of VyOS VMs inside the cluster to do the BGP+EVPN part of the equation?
Thank you!