Hi,
I can configure a network port attached to a VM to be a VLAN trunk using the following commands (when using Open vSwitch):
ovs-vsctl set port tap103i1 trunks=10,20,30
ovs-vsctl set port tap103i1 vlan_mode=trunk
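For reference, both settings can also be applied in a single ovs-vsctl call and checked afterwards (same tap device assumed):
ovs-vsctl set port tap103i1 trunks=10,20,30 vlan_mode=trunk
ovs-vsctl list port tap103i1   # shows trunks and vlan_mode for a quick sanity check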
How can I do this in the web interface?
Cheers,
Tobias
My IBM server is also not the newest model; it's 3 years old. The processor is an Intel Quad-Core Xeon E5320 at 1.86 GHz (8 MB L2 cache) and I have two 250 GB SATA disks (mirrored). So perhaps I have to upgrade my hardware for more performance =(
I have exactly the same trouble. A VM with 2 vCPUs and virtio-net has extremely bad network performance:
root@server1:~# iperf -i 10 -m -t 120 -c server2.tobru.ch
------------------------------------------------------------
Client connecting to james.tobru.ch, TCP port 5001
TCP window size...
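In case someone wants to reproduce this, the receiving side is just a plain listener (default TCP port 5001, as shown above):
iperf -s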
I don't see a reason to migrate PVE to Ubuntu.
PVE is a server "tool" and Debian is the best distribution for server installations. The reason for my little experiment with Ubuntu was to have a nice and updated desktop around the nice Proxmox tools. But a server doesn't need a desktop, or at...
I agree with tom; I only did that for fun and don't run any production VMs on this host.
There were a few issues with the package installation, but they could be resolved without trouble. And you have to create the bridge and the storage manually.
Hi,
Finally I added a dual-port Intel NIC to the AMD system.
03:00.0 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
03:00.1 Ethernet controller: Intel Corporation 82571EB Gigabit Ethernet Controller (rev 06)
Here are the iperf results:
KVM on Host1 ->...
Hi,
I added the following card to my AMD system:
Broadcom Corporation NetXtreme II BCM5709 Gigabit Ethernet (rev 20)
Here are some iperf tests:
KVM on Host1 -> Host1
[ 3] local 10.0.0.26 port 43720 connected with 10.0.0.10 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0...
Not really. I don't expect native network performance; around 10% less would be OK.
I did some further testing and started some iperf tests on my two different hosts (see post above). This is what I get:
virtio
KVM on Host2 -> Host2
[ 3] local 10.0.0.26 port 56369 connected with...
Here are some hardware details:
host1:~# cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 15
model : 107
model name : AMD Athlon(tm) 64 X2 Dual Core Processor 5000+
stepping : 2
cpu MHz : 2600.000
cache size : 512 KB...
Hi,
The KVM guests have poor network performance. I tried several things, like changing the adapter type (a rough CLI sketch follows after the numbers below)...
Here are some iperf tests:
Host1 to Host2: 942Mbit/s
Host1 to OpenVZ Guest on Host2: 941Mbit/s
KVM Guest (virtio) on Host1 to Host2: 300 - 442Mbit/s
KVM Guest (e1000) on Host1 to Host2...
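For the adapter type change mentioned above, a rough CLI equivalent on current qm versions would look like this (just a sketch; VM ID 101 and bridge vmbr0 are placeholders, and omitting macaddr= lets qm generate a new MAC):
qm set 101 -net0 virtio,bridge=vmbr0   # or e1000 instead of virtio for comparison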
I can't create any VMs on this node anymore, so I can't reproduce this error... But if I remember correctly, I did the following steps (rough CLI equivalent after the list):
Created new VM (KVM):
Disk Storage: LVM (local2)
Disk Type: virtio
Installed Ubuntu Server 9.10 inside this VM
Added a 2nd hard disk:
Disk Storage: LVM (local2)...
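Roughly the same steps from the CLI on current qm versions would be something like this (just a sketch; the disk sizes are placeholders, the rest matches the setup above):
qm create 121 --virtio0 local2:32 --net0 virtio,bridge=vmbr0   # new VM with a virtio disk on the local2 LVM storage
qm set 121 --virtio1 local2:32   # add the 2nd disk, also on local2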
Yes, that's right, I have a problem with the LVM =)
The missing volume group "quimby" is on the 2nd drive of VM 121. And the 2nd drive of VM 121 is the logical volume /dev/local2/vm-121-disk-2, which is on the host of this VM. What I don't understand is why the host has a problem with this VG, which...
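Just a general hint (not something that came up in this thread): the host most likely trips over the guest's VG because its own LVM scan walks the LVs under /dev/local2 and finds the quimby metadata inside them. One common workaround is to exclude those devices from the host's scan in /etc/lvm/lvm.conf:
# in the devices { } section of /etc/lvm/lvm.conf on the host
filter = [ "r|^/dev/local2/|", "a/.*/" ]
After that, a pvscan/vgscan on the host should no longer report the guest's quimby VG.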
Hi,
I created an LVM volume in a KVM VM (don't ask me why!). Now I have some trouble with this VM: when I try to start it, the following messages tell me that it can't find the VG created in this VM (ID 121, name quimby):
server:~# qm start 121
Couldn't find device with uuid...
Oh yes, sorry! I was confused because I only saw the size of the master volume. It would be great if this could be improved... Thanks!
So finally I created a "fake" volume on the master server, with the same name as the volume group of the 2nd HD on the slave server. I added this storage as LVM...