Per 'man vzctl', for veth you should end up with something like:
vzctl set <VMID> --netif_add eth<N>,,,,vmbr<M> --save
to add an interface eth<N> in container <VMID> and attach it to bridge vmbr<M> on the host.
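For instance, a minimal sketch with made-up IDs (container 101, interface eth0, host bridge vmbr0 are just placeholders, substitute your own):
# attach eth0 in container 101 to host bridge vmbr0; --save persists it in the CT config
vzctl set 101 --netif_add eth0,,,,vmbr0 --save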
To remove eth<N> from container <VMID>, run:
vzctl set <VMID> --netif_del...
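For completeness, a hedged sketch of the removal with the same placeholder IDs (check 'man vzctl' for the exact syntax on your version):
# detach eth0 from container 101 and persist the change
vzctl set 101 --netif_del eth0 --save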
After upgrading to PVE 1.5 (kernel 2.6.18) I ran into a problem with containers that use bridged Ethernet networking. Here is an example of the output when starting a container from the command line:
Configure veth devices: veth219.0
Adding interface veth219.0 to bridge vmbr2 on CT0 for CT219
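In case it helps anyone hitting the same thing, one way to check whether the veth device actually made it into the bridge on the host (interface and bridge names taken from the log above):
# list all bridges and their member interfaces on the host
brctl show
# veth219.0 should appear under vmbr2 if the interface was added correctly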
I have a cluster of two servers with the same disk layout but with PERC5/i controllers instead. Those servers don't seem as responsive as the ones with PERC6/i, even under no load at all, but that's just a hunch. The PERC5 machines are currently under very heavy load, so comparing results right now makes no sense.
As noted earlier in this thread, it is a little difficult to get clean results from servers that are in use. Anyway, from a PERC6/i with 4x 300 GB 15k SAS disks in RAID 10 I got:
CPU BOGOMIPS: 39903.65
HD SIZE: 94.49 GB (/dev/pve/root)...
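For reference, those fields look like pveperf output; if you want to compare on your own box, something like the following should produce the same kind of numbers (the path is just the default VZ storage volume, adjust to the filesystem you care about):
# run the Proxmox benchmark against the VZ storage volume
pveperf /var/lib/vz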
Don't know if anyone else has seen this, but I couldn't install Proxmox VE 1.1 on a PERC6/i controller until I turned off the write-back cache in the RAID controller. I have three 300 GB 15k SAS disks connected to the PERC6/i. RAID 5 and RAID 1 produced the same result, with the install getting very slow...
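In case it's useful, a hedged sketch of how one might check and change the cache policy from the OS with the LSI MegaCli tool (the exact invocations are an assumption on my part; the controller's Ctrl-R BIOS menu works just as well):
# show current properties, including cache policy, for all logical drives (assumed MegaCli invocation)
MegaCli -LDInfo -Lall -aAll
# switch all logical drives to write-through, i.e. disable write-back caching (assumed MegaCli invocation)
MegaCli -LDSetProp WT -Lall -aAll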