Beta packages for Proxmox VE 3.2 - pvetest

I don't remember in which post I read it, but somebody said that OpenVZ integration would slow the adoption of the new kernel, and somebody else spoke about a possible Proxmox based on KVM technology only...

Sent from my GT-I9195 using Tapatalk
 
I'm not wild about Java, but it works OK. We are a Mac-based company and went with Proxmox because we could administer our cluster without having to fire up a VM and run Windows. There is no SPICE Mac client that works, so don't exclude us in future updates.
 
Hello,

I tried the pve-kernel-3.10.0-1-pve but experienced a lot of error messages about IOMMU. I briefly searched around, and it seems the 3.10 kernel will integrate PCI passthrough differently (using VFIO).
Is there anything different I should do to try out the 3.10 kernel with PCI passthrough, or is this a feature that is still in the works, so it's best to wait until its release in the pve-no-subscription repo?
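For reference, the generic upstream VFIO approach looks roughly like this; a hypothetical sketch of stock VFIO usage, not PVE-specific documentation for this beta (the PCI vendor/device IDs are examples):

Code:
# 1) Enable the IOMMU on the kernel command line (e.g. in /etc/default/grub):
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
#    then run update-grub and reboot.
# 2) Bind the device to vfio-pci by vendor/device ID (example IDs shown):
modprobe vfio-pci
echo "8086 10d3" > /sys/bus/pci/drivers/vfio-pci/new_id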

Thanks!
 
Hey, just a short question... are Intel I210 Gigabit NIC drivers included in this kernel?

Thanks!
 
We have the latest igb drivers, see:

Code:
root@hp1:~# modinfo igb
filename:       /lib/modules/2.6.32-27-pve/kernel/drivers/net/igb/igb.ko
version:        5.1.2
license:        GPL
description:    Intel(R) Gigabit Ethernet Network Driver
author:         Intel Corporation, <e1000-devel@lists.sourceforge.net>
srcversion:     BD399F3F5A5D6E155F2ED26
 

great!
 
Guys,

Thanks for the massive updates. I am having to wait for people to shut down VMs, so I can't yet upgrade and test, but I have some questions about the new features.


1) Open vSwitch

I am hoping this solves, or will eventually solve, the one problem I have had making Proxmox a production system over VMware. The issues are as follows, and I would like to know whether Open vSwitch enables, or should eventually enable, these features in Proxmox.

1a) The ability to have internal virtual network interfaces that can only be used by VMs. If this is possible, do you have plans to make this work with clusters? E.g. a VM running on Proxmox Server 1 that is connected to an internal-only network can, using this network, communicate with a VM running on Proxmox Server 2 that is connected to that same network.

1b) The ability to add external networks without rebooting the system or the entire network stack. Let's take two scenarios, assuming that my server has 4 network ports and I currently have 1 connected to a network.

If I add a VLAN to that port for VMs to use, I should not have to reboot the server or restart the network stack in a way that interrupts active VMs.

If I connect one of the other ports to a network and I do not want to add an IP to it for the host OS, but use it for VMs and/or to create VLAN interfaces for VMs, I should not have to reboot the server or restart the network stack in a way that interrupts active VMs.

I would like to see no reboot of the server or network stack for any changes, but that would probably require kernel modifications and/or some hefty scripting using ip addr (add, change, replace, delete, etc.) rather than ifconfig for changes to everything but the primary Proxmox access IP, so I do not see this happening using the standard Linux network stack.

1c) A permission system to control who can access which networks. E.g. if I build a network for client A, I do not want client B to be able to access client A's network.

1d) Hopefully you will add firewall controls in the future. Being able to protect VMs from each other is needed for some deployments. If routing on the Proxmox host does not bypass firewalls, then this would be fine; if not, having firewalls at the vSwitch layer is needed. Here are some good and bad examples. If either or both of the good examples work now with vSwitch, then we are OK.

Bad:
<VM1: 1.1.1.1> --> |
| <Proxmox Server Network Stack>
<VM2: 2.2.2.2> <---

Good:
<VM1: 1.1.1.1> --> <Proxmox Server Network Stack> --> Router/Firewall |
<VM2: 2.2.2.2> <--------------------------------------------------------

Good:
<VM1: 1.1.1.1> --> <internal vSwitch network> --> <Router/Firewall running in a VM; 5.5.5.5> | <Proxmox Server Network Stack>
<VM2: 2.2.2.2> <----------------------------------------------------------------------------


2) Spice replacing VNC

Perhaps I am confused about what is happening. My impression was that the VNC console was being replaced by a SPICE console that ran in the browser like the VNC console, as suggested by the name spiceterm. However, from the posts in this thread I gather that the SPICE console replacing the VNC console is the same SPICE console we have now, requiring an application on the system from which I am connecting to the VM. If this is the case, I have to say that is disappointing, and I would rather keep VNC even with a popup security warning. I would like to have a console option that runs on any platform without all the bells and whistles, for emergencies (e.g. a car without power locks, windows, mirrors, or a radio, but which will run on fuel obtainable anywhere), alongside an electric car with all the nice features, rather than only the electric car with all the nice features.

Maybe it's not possible to use TightVNC with all its features when accessing a console, rather than having it installed in the OS running in the VM, but I can tell you the TightVNC Java client launched via a browser provides copy and paste between the local and remote OS and has buttons for Ctrl+Alt+Del, the Start menu, and lock keys for Ctrl and Alt.

One of my favorite things about Proxmox compared to VMware is that management is OS independent. I can configure all settings with a web browser, and in some cases an SSH client, which I can access on any client OS, including phone OSes. The VM consoles needed during initial setup are Java based, which also runs on any, or at least many, platforms. I run Linux as my primary desktop OS, and it is rather annoying that even though VMware is based on Linux, they do not provide full management controls for every OS, especially not Linux. I don't want Proxmox to be the same. Something to note: enterprise network appliances like firewalls, switches, and routers made by Cisco, Juniper, and others, and load balancers made by Citrix and others, use Java-based clients for management on any platform.

Hopefully I have just misinterpreted posts in this forum, and there will continue to be a console option that does not require a non-standard or platform-dependent application to be installed on the OS accessing the VM console.

Thanks again for all the hard work Proxmox team.

Rhongomiant
 
1a) The ability to have internal virtual network interfaces that can only be used by VMs.

It's possible for a single host to have a bridge without a physical interface.
If this is possible, do you have plans to make this work with clusters? E.g. a VM running on Proxmox Server 1 that is connected to an internal-only network can, using this network, communicate with a VM running on Proxmox Server 2 that is connected to that same network.
proxmox1 -> proxmox 2/3 will never be supported.

But can't you use a dedicated VLAN for these internal networks?

Alternatively, it can be done with Open vSwitch and VXLAN. Maybe a Linux bridge with VXLAN too (kernel 3.10), but I'm not sure.
With Open vSwitch, each host can have a GRE tunnel to every other host's Open vSwitch.
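Modeled on the VXLAN snippet posted later in this thread, a minimal /etc/network/interfaces sketch of such a GRE tunnel port might look like this (assuming the openvswitch-switch package and an existing OVS bridge vmbr1; the peer IP is hypothetical):

Code:
allow-vmbr1 gre1
iface gre1 inet manual
    ovs_type OVSTunnel
    ovs_bridge vmbr1
    ovs_tunnel_type gre
    ovs_tunnel_options options:remote_ip=192.168.1.56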


1b) The ability to add external networks without rebooting the system or the entire network stack. Let's take two scenarios, assuming that my server has 4 network ports and I currently have 1 connected to a network.

If I add a VLAN to that port for VMs to use, I should not have to reboot the server or restart the network stack in a way that interrupts active VMs.

If I connect one of the other ports to a network and I do not want to add an IP to it for the host OS, but use it for VMs and/or to create VLAN interfaces for VMs, I should not have to reboot the server or restart the network stack in a way that interrupts active VMs.

I would like to see no reboot of the server or network stack for any changes, but that would probably require kernel modifications and/or some hefty scripting using ip addr (add, change, replace, delete, etc.) rather than ifconfig for changes to everything but the primary Proxmox access IP, so I do not see this happening using the standard Linux network stack.

Dynamic VLANs (without reboot) are already implemented in current Proxmox (KVM only); just set the VLAN tag on the guest's network interface.
Only bridge create/delete through the Proxmox GUI needs a reboot (but you can do it manually on the command line via /etc/network/interfaces without a reboot).
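For illustration, the host-side tag can be set from the CLI as well as the GUI; a hypothetical example (the VM ID and VLAN number are made up; omitting the MAC lets qm generate a new one, so pass virtio=<existing MAC> to keep it):

Code:
# Put VM 100's first NIC on vmbr0 with VLAN tag 5, tagged on the host side:
qm set 100 -net0 virtio,bridge=vmbr0,tag=5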

1c) A permission system to control who can access which networks. E.g. if I build a network for client A, I do not want client B to be able to access client A's network.

We have talked about this on the dev mailing list; maybe for 2014.

1d) Hopefully you will add firewall controls in the future. Being able to protect VMs from each other is needed for some deployments. If routing on the Proxmox host does not bypass firewalls, then this would be fine; if not, having firewalls at the vSwitch layer is needed. Here are some good and bad examples. If either or both of the good examples work now with vSwitch, then we are OK.

The firewall feature should come really soon, Proxmox 3.3 I think. But for Linux bridges only currently (not Open vSwitch yet, because you can't use iptables with it).
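As a rough illustration of why this works on Linux bridges but not (yet) on Open vSwitch: iptables can match bridge ports via the physdev module, which OVS ports bypass. A hypothetical rule (the tap device name is made up; bridge-nf must be enabled):

Code:
# Let iptables see bridged frames:
sysctl -w net.bridge.bridge-nf-call-iptables=1
# Drop all bridged traffic heading toward one VM's tap device:
iptables -A FORWARD -m physdev --physdev-out tap101i0 --physdev-is-bridged -j DROP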
 
I just checked my pve-no-subscription installation for updates and was wondering why there are so many new packages. E.g. the new QEMU 1.7 seems to already be in the pve-no-subscription repo. I thought the new packages were only available if the pvetest repo is enabled!?

Code:
# apt-cache policy pve-qemu-kvm
pve-qemu-kvm:
  Installed: 1.4-17
  Candidate: 1.7-4
  Version table:
     1.7-4 0
        500 http://download.proxmox.com/debian/ wheezy/pve-no-subscription amd64 Packages
 *** 1.4-17 0
        500 http://download.proxmox.com/debian/ wheezy/pve-no-subscription amd64 Packages
        100 /var/lib/dpkg/status
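For reference, these are the wheezy repository lines for the two repositories being discussed (per the Package_repositories wiki page linked later in this thread):

Code:
# /etc/apt/sources.list entries for Proxmox VE 3.x on Debian wheezy:
deb http://download.proxmox.com/debian wheezy pve-no-subscription
deb http://download.proxmox.com/debian wheezy pvetest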
 
Thanks for responding, spirit.


It's possible for a single host to have a bridge without a physical interface.


Well, I guess I am not enough of an outside-the-box thinker to have thought of creating a bridge without giving it a physical interface. However, the problem exists that every time I created one of these, a reboot would be required. Not a good option on a production system. I am hoping that Open vSwitch provides this feature, allowing networks to be created without using a physical interface or VLAN and without requiring a reboot.



proxmox1 -> proxmox 2/3 will never be supported.

But can't you use a dedicated VLAN for these internal networks?

Well, yes, I could do that, and I thought that was a stupid question after I asked it, but if I had not asked it, I would not have gotten the response below about VXLAN and GRE, which may be exactly what I want.


Alternatively, it can be done with Open vSwitch and VXLAN. Maybe a Linux bridge with VXLAN too (kernel 3.10), but I'm not sure.
With Open vSwitch, each host can have a GRE tunnel to every other host's Open vSwitch.

This is interesting. Why burn a real VLAN for internal-only networks if there is an option like this? Hopefully the performance is good.


Dynamic VLANs (without reboot) are already implemented in current Proxmox (KVM only); just set the VLAN tag on the guest's network interface.
Only bridge create/delete through the Proxmox GUI needs a reboot (but you can do it manually on the command line via /etc/network/interfaces without a reboot).

Since you stated KVM only, I assume you mean that this works at the VM level, but that is not a really workable solution for an enterprise providing VMs to less technically knowledgeable clients, and it requires more work, as every VM needs additional network settings. It's a solution that requires people to start out with workarounds, as few installers support setting VLANs, and few if any post-install network setup systems support this. You have to get on the VLAN after the install. What if a client wants to install from their own network? Now they have to build in the ability to set a VLAN and reset that VLAN with every reboot, something they would not do except for Proxmox. What if the install solution is PXE boot? Yes, some network PXE boot options do allow VLANs, but that is not a common use. People expect the network stack to do all the network magic, so normal uninteresting stuff is done at the OS level.

If you did not mean VLAN at the OS level, I need some help, as I do not see or understand how to create VLANs that any VM can use in the GUI. I assume that I would have to do this manually at the CLI, but when I do, does the Proxmox GUI see these CLI network changes without a reboot and allow these networks to be selected?

This is one of the things that I hope Open vSwitch will solve. The only thing in the network tab would be the interface needed to access Proxmox, and then vSwitch to add VLANs to that interface and make use of other interfaces, VLANs or not, without requiring reboots.

Is this true today?

If not, will it be true in the future?

Finally, if I use VXLAN for internal-only VLANs, will I be able to create VLANs that actually go over a switch, without traffic for the real VLANs traversing the VXLAN/GRE network? To be more clear, this would make it so that server-to-server VM traffic for internal VLANs would communicate through GRE, but server-to-server VM traffic for real network interfaces and VLANs would go through a switch.


We have talked about this on the dev mailing list; maybe for 2014.

Your above statement was about network interface permissions. This would be so awesome. Among network appliances, which Proxmox is in this context, you usually have a management interface and then everything else. So I have one dedicated, non-VLANed link to manage Proxmox and do not want traffic other than management traffic going over this interface. I then have another physical connection that is VLANed for that same network and others, and that is what I want people to use. This makes it so there is nothing special about the management interface, so in a pinch it should not break, and if the links used by VLANs are saturated, the management interface can still be reached.

The firewall feature should come really soon, Proxmox 3.3 I think. But for Linux bridges only currently (not Open vSwitch yet, because you can't use iptables with it).

This seems promising, sweet.

Thanks again,

Rhongomiant
 
I just checked my pve-no-subscription installation for updates and was wondering why there are so many new packages. E.g. the new QEMU 1.7 seems to already be in the pve-no-subscription repo. I thought the new packages were only available if the pvetest repo is enabled!?

Code:
# apt-cache policy pve-qemu-kvm
pve-qemu-kvm:
  Installed: 1.4-17
  Candidate: 1.7-4
  Version table:
     1.7-4 0
        500 http://download.proxmox.com/debian/ wheezy/pve-no-subscription amd64 Packages
 *** 1.4-17 0
        500 http://download.proxmox.com/debian/ wheezy/pve-no-subscription amd64 Packages
        100 /var/lib/dpkg/status

Yep, did not do my homework (pay attention!!)... now I have 4 hosts giving me fits. I have to post versions and results later today.

I think I'm finally in a somewhat stable config, but the cluster is not happy yet...
 
We uploaded a lot of packages for beta testing, containing new features, small improvements and countless bug fixes.

Here are the highlights

  • Proxmox VE Ceph Server
  • Spiceterm: SPICE as a full replacement for the Java-based console (VM, shell and OpenVZ console). For now, the Java console is still the default
  • QEMU 1.7, including major update of the VM backup code
  • Open vSwitch
  • New 3.10 Kernel (based on RHEL7, for now without OpenVZ support)
  • Latest 2.6.32 kernel, updated drivers
A big thank you to our active community for all the feedback, testing, bug reporting and patch submissions. For complete release notes, see the changelogs of each package.

Package repositories
http://pve.proxmox.com/wiki/Package_repositories

Everybody is encouraged to test and give feedback!
__________________
Best regards,

Martin Maurer
Proxmox VE project leader

Well, I updated my 4 hosts starting Monday morning, with mixed results.

My hosts keep sending out corosync messages....

Code:
Feb 19 12:42:44 proliant01 corosync[3353]:   [TOTEM ] A processor joined or left the membership and a new membership was formed.
Feb 19 12:42:44 proliant01 corosync[3353]:   [CPG   ] chosen downlist: sender r(0) ip(10.10.0.200) ; members(old:4 left:0)
Feb 19 12:42:44 proliant01 corosync[3353]:   [MAIN  ] Completed service synchronization, ready to provide service.
Feb 19 12:42:54 proliant01 corosync[3353]:   [TOTEM ] A processor failed, forming new configuration.
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] CLM CONFIGURATION CHANGE
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] New Configuration:
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] #011r(0) ip(10.10.0.200) 
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] #011r(0) ip(10.10.0.201) 
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] #011r(0) ip(10.10.0.202) 
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] #011r(0) ip(10.10.0.204) 
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] Members Left:
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] Members Joined:
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] CLM CONFIGURATION CHANGE
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] New Configuration:
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] #011r(0) ip(10.10.0.200) 
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] #011r(0) ip(10.10.0.201) 
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] #011r(0) ip(10.10.0.202) 
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] #011r(0) ip(10.10.0.204) 
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] Members Left:
Feb 19 12:42:54 proliant01 corosync[3353]:   [CLM   ] Members Joined:

Along with TOTEM retransmits every 4 or 5 minutes:

Code:
Feb 19 12:46:22 proliant01 corosync[3353]:   [TOTEM ] Retransmit List:


server 1

Code:
proxmox-ve-2.6.32: 3.1-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.1-43 (running version: 3.1-43/1d4b0dfb)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-13
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

server 2

Code:
proxmox-ve-2.6.32: 3.1-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.1-43 (running version: 3.1-43/1d4b0dfb)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-13
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

server 3

Code:
proxmox-ve-2.6.32: 3.1-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.1-43 (running version: 3.1-43/1d4b0dfb)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-13
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1


server 4

Code:
proxmox-ve-2.6.32: 3.1-121 (running kernel: 2.6.32-27-pve)
pve-manager: 3.1-43 (running version: 3.1-43/1d4b0dfb)
pve-kernel-2.6.32-20-pve: 2.6.32-100
pve-kernel-2.6.32-27-pve: 2.6.32-121
pve-kernel-2.6.32-19-pve: 2.6.32-96
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-25-pve: 2.6.32-113
pve-kernel-2.6.32-22-pve: 2.6.32-107
pve-kernel-2.6.32-26-pve: 2.6.32-114
pve-kernel-2.6.32-23-pve: 2.6.32-109
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.5-1
pve-cluster: 3.0-12
qemu-server: 3.1-15
pve-firmware: 1.1-2
libpve-common-perl: 3.0-13
libpve-access-control: 3.0-11
libpve-storage-perl: 3.0-19
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-6
vzctl: 4.0-1pve4
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.7-4
ksm-control-daemon: 1.1-1
glusterfs-client: 3.4.2-1

It had a few quirks before the upgrade; the biggest one I seemed to deal with often was guest time drift.

This morning's test with omping between the 4 servers showed all was well, but now it seems like server 1 (proliant01) is lacking multicast replies...
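For anyone wanting to repeat the multicast check, a typical omping invocation looks like this (run the same command simultaneously on every node; the addresses match the log excerpt above):

Code:
omping -c 600 -i 1 -q 10.10.0.200 10.10.0.201 10.10.0.202 10.10.0.204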
 
"Everybody is encouraged to test and give feedback!
__________________
Best regards,

Martin Maurer
Proxmox VE project leader"




Sorry, just thought I was in line with Martin's request....
 
Thanks for responding, spirit.


Well, I guess I am not enough of an outside-the-box thinker to have thought of creating a bridge without giving it a physical interface. However, the problem exists that every time I created one of these, a reboot would be required. Not a good option on a production system. I am hoping that Open vSwitch provides this feature, allowing networks to be created without using a physical interface or VLAN and without requiring a reboot.
We need to work on that. Currently the PVE GUI creates a new /etc/network/interfaces.new, and at reboot this replaces /etc/network/interfaces.
If you have the skill, you can create a bridge (vmbrXX) manually in /etc/network/interfaces and just do an "ifup vmbrXX" to enable it; that's all.
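For example, a minimal internal-only bridge added by hand might look like this (all names here are hypothetical):

Code:
# In /etc/network/interfaces -- a bridge with no physical ports:
auto vmbr2
iface vmbr2 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0

# Then enable it without touching other interfaces:
ifup vmbr2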




Since you stated KVM only, I assume you mean that this works at the VM level, but that is not a really workable solution for an enterprise providing VMs to less technically knowledgeable clients, and it requires more work, as every VM needs additional network settings. It's a solution that requires people to start out with workarounds, as few installers support setting VLANs, and few if any post-install network setup systems support this. You have to get on the VLAN after the install. What if a client wants to install from their own network? Now they have to build in the ability to set a VLAN and reset that VLAN with every reboot, something they would not do except for Proxmox. What if the install solution is PXE boot? Yes, some network PXE boot options do allow VLANs, but that is not a common use. People expect the network stack to do all the network magic, so normal uninteresting stuff is done at the OS level.

If you did not mean VLAN at the OS level, I need some help, as I do not see or understand how to create VLANs that any VM can use in the GUI. I assume that I would have to do this manually at the CLI, but when I do, does the Proxmox GUI see these CLI network changes without a reboot and allow these networks to be selected?

This is one of the things that I hope Open vSwitch will solve. The only thing in the network tab would be the interface needed to access Proxmox, and then vSwitch to add VLANs to that interface and make use of other interfaces, VLANs or not, without requiring reboots.

Is this true today?


Currently, with a KVM VM, you can set up a VLAN tag on a VM network interface in the GUI. This manages VLANs on the host side (not inside the guest).
And this doesn't need any reboot (host or guest). You can also change the VLAN dynamically, without rebooting the guest.
But maybe you want a bridge with a predefined VLAN (so the user can't choose the VLAN)?
For this, you can create a bridge with a tagged ethX.Y interface in the GUI (but then currently you need to reboot, or do an "ifup vmbrX" manually to enable the bridge).
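A sketch of such a bridge with a predefined VLAN, done by hand (names are hypothetical; assumes the vlan package so eth0.5 can be created):

Code:
# In /etc/network/interfaces -- a bridge fixed to VLAN 5 on eth0:
auto vmbr5
iface vmbr5 inet manual
    bridge_ports eth0.5
    bridge_stp off
    bridge_fd 0

# Enable it without a reboot:
ifup vmbr5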

Finally, if I use VXLAN for internal-only VLANs, will I be able to create VLANs that actually go over a switch, without traffic for the real VLANs traversing the VXLAN/GRE network? To be more clear, this would make it so that server-to-server VM traffic for internal VLANs would communicate through GRE, but server-to-server VM traffic for real network interfaces and VLANs would go through a switch.

Yes, that makes sense.

I'll try to implement that soon in the GUI.
For Open vSwitch, it's already possible to do it in /etc/network/interfaces:

Code:
auto vmbr1
iface vmbr1 inet static
    address 10.168.1.55
    netmask 255.255.255.0
    ovs_type OVSBridge
    ovs_ports vx1

allow-vmbr1 vx1
iface vx1 inet manual
    ovs_type OVSTunnel
    ovs_bridge vmbr1
    ovs_tunnel_type vxlan
    ovs_tunnel_options options:remote_ip=192.168.1.56 options:key=flow options:dst_port=9999

For Linux bridges, I need to add some patches.

I need to do more tests, but I don't think it's difficult, so maybe for Proxmox 3.3.

 
Why not use grsecurity-stable with the vserver kernel patch instead of OpenVZ?

We use OpenVZ here, so a vserver kernel does not help - there is no plan to move to vserver.
 
Is it possible in Proxmox to limit bandwidth with Open vSwitch on a per-VLAN (or per-bridge) basis?

E.g.

bond0 - eth0 eth1
bond1 - eth0.5 eth1.5

vmbr0 - bond0
vmbr1 - bond1

But then have vmbr0 as a whole limited to 100Mbit and vmbr1 limited as a whole to 1Gbit?

Thanks
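As far as I know, OVS has no single per-bridge cap, but it does offer per-interface ingress policing, which could be applied to each port on a bridge; a hypothetical sketch (rate in kbps, burst in kb; the port name is made up):

Code:
ovs-vsctl set interface tap101i0 ingress_policing_rate=100000
ovs-vsctl set interface tap101i0 ingress_policing_burst=10000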
 
