Hi guys
Can a Proxmox host management server control separate OS instances/virtual machines when the host servers sit in two different, directly connected networks? In other words, if the management server is in one network, can it reach the other network, and can it still segment the VMs on the remote servers?
We want to start using Proxmox VE on a network we are about to design.
Before we split it into the two networks, we want to know whether this is possible.
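From what I've read, the hosts would all join one Proxmox cluster regardless of which network they sit in, as long as the cluster traffic can pass between the two networks. A minimal sketch of what I think that would look like (the cluster name and IP address are made up by me):

```
# on the first Proxmox host, in network A (cluster name is hypothetical)
pvecm create ourcluster

# on a host in network B, joining through the first host's IP
# (10.0.1.10 is a made-up example address)
pvecm add 10.0.1.10

# verify that the hosts see each other
pvecm status
```

If I understand it correctly, the cluster communication (corosync) has to work across the two networks for this, so the direct connection between them would need to carry that traffic.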
Inside Proxmox we want to use only KVM (no OpenVZ containers).
Most of our virtual machines run Debian, and the Proxmox hosts themselves are installed on Debian as well.
1) We want to create two networks; each network will have two Cisco WS-C4900M core routers/switches.
2) Each core is connected to two Cisco Nexus parent switches (N5020).
3) Each N5K-C5020 is connected to two racks, each holding two Nexus fabric extenders (uplinked to the parent switches with SFP+). We are talking about 10-15 servers per rack, with each server connected to both extenders in its rack for redundancy (see the sketch after this list).
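For the per-server redundancy in point 3, I imagine something like the following on each host, assuming Open vSwitch (which I ask about below); the interface names eth0/eth1 are assumptions on my part:

```
# one OVS bridge per host; a two-NIC bond connects it to the two
# fabric extenders in the rack (eth0/eth1 are assumed NIC names)
ovs-vsctl add-br vmbr0
ovs-vsctl add-bond vmbr0 bond0 eth0 eth1 bond_mode=active-backup

# if the extenders/parents present both links as one vPC port
# channel, LACP could presumably be used instead:
#   ovs-vsctl set port bond0 lacp=active bond_mode=balance-tcp
```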
I have read about Open vSwitch, a virtual multilayer switch that works with Proxmox and can be installed on Debian. Does this do the trick?
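If it does, I would hope that segmenting a VM comes down to tagging its port on the OVS bridge. A sketch of what I have in mind (the tap interface name is hypothetical, following Proxmox's tap<vmid>i<n> naming):

```
# restrict VM 100's first NIC to VLAN 100
# (tap100i0 is the assumed Proxmox name for that tap device)
ovs-vsctl set port tap100i0 tag=100
```

The VLAN would then of course have to be trunked through the extenders and parent switches as well.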
Open vSwitch:
Open vSwitch is designed to support transparent distribution across multiple physical servers by enabling creation of cross-server switches in a way that abstracts out the underlying server architecture, similarly to the VMware vNetwork distributed vswitch or Cisco Nexus 1000V.
Open vSwitch can operate both as a software-based switch running within the virtual machine (VM) hypervisors, and as the control stack for dedicated switching hardware; as a result, it has been ported to multiple virtualization platforms and switching chipsets. Open vSwitch is the default switch in XenServer since its version 6.0 and in the Xen Cloud Platform (via its XAPI management toolstack); it also supports Xen, Linux KVM, Proxmox VE and VirtualBox. Open vSwitch has also been integrated into various cloud computing software platforms and virtualization management systems, including OpenStack, openQRM, OpenNebula and oVirt.
The Linux kernel's implementation of Open vSwitch was merged into the mainline kernel in version 3.3, released on March 18, 2012; official Linux packages are available for Debian, Fedora and Ubuntu.
As of November 2013, Open vSwitch provides the following features:
Visibility into communication between virtual machines via NetFlow, sFlow, IPFIX, SPAN, RSPAN, and GRE-tunneled mirrors
Link aggregation through the Link Aggregation Control Protocol (LACP, IEEE 802.1AX-2008). LACP is used to create a port channel; our extenders and parent switches support port channels, and Open vSwitch supports them as well when needed (as in the bonding sketch above).
Standard 802.1Q Virtual LAN (VLAN) model for network partitioning, with support for trunking
Support for Bidirectional Forwarding Detection (BFD) and 802.1ag link monitoring
Support for the Spanning Tree Protocol (STP, IEEE 802.1D-1998)
Fine-grained quality of service (QoS) control for different applications, users, or data flows
Support for the Hierarchical fair-service curve (HFSC) queuing discipline (qdisc)
Traffic policing at the level of the virtual machine interface
Network interface controller (NIC) bonding, with load balancing by source MAC addresses, active backups, and layer 4 hashing
Support for the OpenFlow protocol, including various virtualization-related extensions
Complete IPv6 (Internet Protocol version 6) support
Support for multiple tunneling protocols: GRE, Virtual Extensible LAN (VXLAN), Internet Protocol Security (IPsec), and GRE or VXLAN over IPsec (a GRE example follows after this list)
Remote configuration protocol, with existing bindings for the C and Python programming languages
Implementation of the packet forwarding engine in kernel space or user space, allowing additional flexibility
Multi-table forwarding pipeline with a flow-caching engine
Forwarding layer abstraction, making it easier to port Open vSwitch to new software and hardware platforms
http://en.wikipedia.org/wiki/Open_vSwitch
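Regarding the tunneling support mentioned in the list above, I picture stretching a VM network between the two sites with a GRE port on the same bridge, something like this (the remote address is made up by me):

```
# on a host in network A: GRE tunnel to a host in network B
# (10.0.2.20 is a hypothetical address for the remote host)
ovs-vsctl add-port vmbr0 gre0 -- set interface gre0 type=gre \
    options:remote_ip=10.0.2.20
```

As far as I can tell, plain 802.1Q trunking over the direct link would be the simpler alternative if the two networks share layer 2.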
Thanks for any suggestions.
BR
Tim