Hello there. I hope to get some help with my setup. I'm trying to move one step at a time to keep the complexity manageable.
So far I have a cluster of three nodes (bare metal). Each node has only a single NIC, with a /29 network assigned to it, so each node can have up to 5 IP addresses.
The nodes communicate over public IPs (1-2 hops between nodes).
I plan to host client VMs, and each client should have its own private, independent network. I accomplished this by running an EVPN controller and creating a VNet per client. It works fine. Also, for each VNet I launch a simple VM running dnsmasq to serve as a DHCP server.
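For reference, the dnsmasq instance in that VM is configured roughly along these lines (simplified; the pool boundaries and lease time here are illustrative, only the 10.1.0.0/22 subnet and 10.1.0.1 gateway come from my actual setup):
Code:
# /etc/dnsmasq.conf inside the per-VNet DHCP VM (illustrative values)
interface=eth0
bind-interfaces
dhcp-authoritative
# hand out addresses from the vnet subnet 10.1.0.0/22
dhcp-range=10.1.0.100,10.1.3.250,255.255.252.0,12h
# point clients at the vnet gateway
dhcp-option=option:router,10.1.0.1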
Overall it works fine. The only small change I had to make was adding static routes to the peer nodes in /etc/network/interfaces:
Code:
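# static host routes to the two peer nodes' public IPs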
post-up ip route add 63.141.x.x/32 via 69.197.x.x
post-up ip route add 63.141.x.x/32 via 69.197.x.x
Now I have a question: how can I assign a public IP address to a specific client VM? Do I do it with the firewall or via FRR? Do I assign an extra NIC to the VM? I gathered a set of requirements:
* Ability to dedicate a public IP to a VM, so that all requests to that IP go directly to the VM
* The VM might be hosted on any node (since we have VXLAN, this should be fine)
* All outgoing traffic from the VM should use the dedicated IP: curl http://ifconfig.me/ip should show the assigned IP
Before I start re-inventing the wheel: are there straightforward solutions? Is there a known keyword to search for? I believe a separate NIC wouldn't work because of the second requirement (the VM might be on a different node). Also, I'm afraid I'll need to fiddle with DHCP and the default gateway so that outgoing traffic comes from that IP.
Overall, what I want looks like AWS Elastic IP assignment to EC2 instances.
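To make the requirements concrete, this is roughly the 1:1 NAT I imagine on whichever node currently routes the VM's traffic. It's only a sketch of the idea, not something I've tested: 203.0.113.10 (the public IP), 10.1.0.50 (the VM's private address), and vmbr0 are placeholders:
Code:
# claim the dedicated public IP on the node (interface name is a placeholder)
ip addr add 203.0.113.10/32 dev vmbr0
# inbound: anything addressed to the public IP goes straight to the VM
iptables -t nat -A PREROUTING -d 203.0.113.10 -j DNAT --to-destination 10.1.0.50
# outbound: the VM's traffic leaves with the dedicated IP as its source
iptables -t nat -A POSTROUTING -s 10.1.0.50 -j SNAT --to-source 203.0.113.10
But I don't see how to make something like this follow the VM when it lives on another node, which is part of why I'm asking.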
Also, I don't know if it matters, but VMs can ping each other even when they are on different nodes, while node A can't ping VMs hosted on the other nodes (only VMs hosted on node A itself).
So far, my SDN config looks like this:
Code:
root@nocix-kz-1:/home/customer# cat /etc/pve/sdn/*
evpn: primary
        asn 65000
        peers 69.197.xxx.xx,63.141.xxx.xx,63.141.xxx.xxx

subnet: primary-10.1.0.0-22
        vnet platform
        gateway 10.1.0.1
        snat 1

vnet: platform
        zone primary
        tag 101

evpn: primary
        controller primary
        vrf-vxlan 100
        exitnodes nocix-kz-3,nocix-kz-2,nocix-kz-1
        ipam pve
        mac BC:24:11:3A:B9:6E
Thank you for your attention!