Hello everyone,
We are currently evaluating Proxmox for our development environment, and possibly for homologation and production if all goes well.
Right now I have a cluster of 6 servers connected to our servers VLAN. We have several separate development teams: some use containers, some want a "private" network for their Kubernetes clusters, and some run plain old products (Apache, Tomcat, etc.) on ordinary VMs. So the basic requirement is to provide each team with a separate network (/24 segments are more than enough), where the networks may or may not need to see each other, but all of them must reach the basic shared services (Oracle, mail server, file server) that run on physical servers in the servers VLAN (the same VLAN the Proxmox hosts are on).
To keep it simple, I thought the easiest way would be to implement VXLANs using Proxmox SDN: create a VXLAN zone, define a vnet with a network range for each team, and attach the VMs' network interfaces to those vnets as needed, using cloud-init and shell scripts to automate the deployment of these VMs.
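For illustration, a minimal sketch of what I mean, assuming the SDN config file layout from the Proxmox documentation (the zone/vnet names, peer addresses, VM ID, and IP ranges are made-up examples):

```
# /etc/pve/sdn/zones.cfg -- one VXLAN zone spanning the cluster
# (peers = the cluster addresses of the six Proxmox hosts)
vxlan: devzone
        peers 10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6

# /etc/pve/sdn/vnets.cfg -- one vnet (bridge) per team, tag = VNI
vnet: team1
        zone devzone
        tag 100100

# then press "Apply" in Datacenter -> SDN to roll the config out

# attach a VM NIC to the team's vnet and set address/gateway via cloud-init
qm set 9001 --net0 virtio,bridge=team1
qm set 9001 --ipconfig0 ip=10.10.1.50/24,gw=10.10.1.1
```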
To address the problem of defining a single gateway for each VM (so they can reach the physical network outside the Proxmox hosts), I created a small VM on each host and used the keepalived daemon to tie the interfaces from both sides together, creating a VRRP virtual router address on the external interface as well as on each internal interface connected to the various VXLANs.
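A trimmed-down sketch of the keepalived.conf on these router VMs (interface names, router IDs, and addresses are examples; assume eth0 faces the servers VLAN and eth1 sits in one team's VXLAN):

```
# /etc/keepalived/keepalived.conf

# virtual gateway on the external (servers VLAN) side
vrrp_instance EXTERNAL {
    state MASTER                 # BACKUP on the other router VMs
    interface eth0
    virtual_router_id 51
    priority 150                 # lower priority on the backups
    advert_int 1
    virtual_ipaddress {
        192.168.10.254/24
    }
}

# virtual gateway inside one VXLAN network (one instance per vnet)
vrrp_instance TEAM1 {
    state MASTER
    interface eth1
    virtual_router_id 52
    priority 150
    advert_int 1
    virtual_ipaddress {
        10.10.1.1/24             # the gw= address the team's VMs receive
    }
}
```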
I hope the following makes it clear:
With kernel IP forwarding enabled on these "keepalived/VRRP" VMs, all the networks would be able to reach each other and the external network. Access would be controlled using the built-in Proxmox firewall on each VM interface, plus iptables inside the router VMs.
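On the router VMs that boils down to something like this (subnets again just examples; the idea is default deny, then open only what each team needs):

```
# enable kernel routing
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-forward.conf
sysctl --system

# default deny between networks
iptables -P FORWARD DROP
# allow return traffic for established connections
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# every team network may reach the shared services on the servers VLAN
iptables -A FORWARD -s 10.10.0.0/16 -d 192.168.10.0/24 -j ACCEPT
# two teams that need to see each other
iptables -A FORWARD -s 10.10.1.0/24 -d 10.10.2.0/24 -j ACCEPT
iptables -A FORWARD -s 10.10.2.0/24 -d 10.10.1.0/24 -j ACCEPT
```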
Since I'm by no means a network expert (I'm more of a server admin), I would kindly ask for your opinions on this. Is this overkill? Is there an easier way to give the developers separate internal networks? This solution fits our physical servers well, since they use fast 10GbE interfaces for the cluster interconnect, so the UDP packets carrying the VXLAN traffic travel quickly from one server to another.
Any help/suggestion/correction will be appreciated. Thank you!