packet loss - Ubuntu 12.04 guest

svacaroaia
Oct 4, 2012
Hi,

I have noticed up to 10% packet loss for Ubuntu 12.04 LTS guests using the virtio network adapter.
Ubuntu 10.04 guests don't seem to be affected.

It seems there is a known bug:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/997978

Is anyone else experiencing this issue?
Are there any solutions / workarounds?

Here are some technical details.

I am using a bonding interface:

auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode 4
bond_lacp_rate 1
bond_xmit_hash_policy 1


auto bond0.2
iface bond0.2 inet manual
vlan-raw-device bond0


auto vmbr1
iface vmbr1 inet manual
bridge_ports bond0.2
bridge_stp off
bridge_fd 0
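
As a rough way to quantify the loss (just a sketch; the 192.168.2.1 target is only an example address on the VLAN), a long ping run from inside the guest prints the packet-loss percentage in its summary:
Code:
# from inside the 12.04 guest: send 1000 pings at 200 ms intervals and read the summary lines
ping -c 1000 -i 0.2 192.168.2.1 | tail -2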

Thanks
Steven
 
Are you already running the latest Proxmox VE 2.2 on the host and the latest kernel inside your Ubuntu guest? If not, upgrade both.
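
A rough sketch of the upgrade steps, assuming the standard Proxmox VE 2.x apt repository is configured on the host and the stock Ubuntu archive inside the guest:
Code:
# on the Proxmox VE host: pull in the 2.2 packages and the current pve kernel, then reboot
apt-get update
apt-get dist-upgrade

# inside the Ubuntu 12.04 guest: update to the latest 12.04 kernel, then reboot
apt-get update
apt-get install linux-image-generic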
 
Hi Tom,
Thanks for your prompt answer.
I am running the latest on the host ...

pveversion -v
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-14-pve
proxmox-ve-2.6.32: 2.1-74
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-14-pve: 2.6.32-74
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-49
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-31
vncterm: 1.0-3
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.1-8
ksm-control-daemon: 1.1-1


... and on the guest:
Linux 3.2.0-29-generic
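
A quick guest-side check (a sketch; eth0 is assumed to be the virtio NIC inside the guest) to confirm the virtio driver is in use and to see whether the guest itself counts any drops:
Code:
# should report "driver: virtio_net" for a virtio adapter
ethtool -i eth0

# per-interface RX/TX statistics, including dropped packets
ip -s link show eth0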
 
Hi Tom,
Thanks for your prompt answer
I am running the latest on host ...

pveversion -v
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-14-pve
...

No, upgrade to the latest 2.2 from today.
 
Hi,

I have noticed up to 10% packet loss for Ubuntu 12.04 LTS guests using the virtio network adapter ...

It seems there is a known bug:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/997978
...

The patches from the Ubuntu bug:
- 9001-virtio-add-missing-mb-on-notification.patch
- 9002-virtio-add-missing-mb-on-enable-notification.patch
- 9003-virtio-order-index-descriptor-reads.patch
are already included in qemu-kvm 1.2.
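
To double-check which qemu-kvm build a given host actually runs (a sketch; nothing here is specific to this setup):
Code:
# packaged version as seen by Proxmox
pveversion -v | grep pve-qemu-kvm

# version reported by the emulator binary itself
kvm --version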




 
I've had the same issue for over a year and hoped the updates I did yesterday would have included a fix, but no such luck. I also noticed it only happens with a static IP on my guest: if I leave the guest on DHCP there are zero dropped packets. That seems very strange to me, and I need a fixed IP. The dropped packets don't seem to be a problem for local network traffic, but I have routed HTTP/HTTPS traffic from the Internet that is having trouble connecting or staying connected. I'm not sure what the situation is, since some clients have no issues while for others it is pronounced. A guest reboot (or network restart on the guest) clears the issue for a bit, but eventually client problems start happening again.

Our DHCP is provided by a Cisco router, so I'm wondering if there is something going on between the Cisco and the Ubuntu 12 guest where DHCP configures something on one end or the other that is not configured when I give the guest a static address.

Any thoughts?
Rois Cannon
 
Some notes:

vlan-raw-device is not needed anymore; it is a Debian Squeeze thing.
bond_xmit_hash_policy 1 could be the cause of your problem, since this value does not exist:
(http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding#bonding driver options)
xmit_hash_policy selects the transmit hash policy to use for slave selection in balance-xor and 802.3ad modes. Possible values are:

  • layer2 Uses XOR of hardware MAC addresses to generate the hash. The formula is
(source MAC XOR destination MAC) modulo slave count

This algorithm will place all traffic to a particular network peer on the same slave.
This algorithm is 802.3ad compliant.

  • layer3+4 This policy uses upper layer protocol information, when available, to generate the hash. This allows for traffic to a particular network peer to span multiple slaves, although a single connection will not span multiple slaves.


This policy is intended to mimic the behavior of certain switches, notably Cisco switches with PFC2 as well as some Foundry and IBM products.
IMHO bridged networking in Proxmox with bond mode 4 (802.3ad) needs xmit_hash_policy layer3+4 in corner cases (advanced VLANs etc.).

Summing it all up, I would rewrite your network config this way:
Code:
auto bond0
iface bond0 inet manual
slaves eth0 eth1
bond_miimon 100
bond_mode 4
bond_lacp_rate 1
bond_xmit_hash_policy layer3+4

auto vmbr1
iface vmbr1 inet manual
bridge_ports bond0.2
bridge_stp off
bridge_fd 0
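
Once the bond is brought up with that config, the kernel reports the active mode and hash policy, which is an easy way to confirm the change took effect (a sketch; an ifdown/ifup of bond0 or a reboot may be needed first):
Code:
# shows "Bonding Mode", "Transmit Hash Policy" and per-slave MII status
grep -iE 'mode|hash|mii status' /proc/net/bonding/bond0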
 
Thanks for taking a look. I should have added a little more info on my particular host's interfaces configuration file. I'm not using bonding like svacaroaia is, only a bridge.
Code:
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
address 192.168.5.10
netmask 255.255.255.0
gateway 192.168.5.1
bridge_ports eth0
bridge_stp off
bridge_fd 0

Any thoughts on why the guest drops packets with a static address but not with DHCP in this situation?
Thanks
Rois
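
One way to narrow that down (a sketch; eth0 and the file names are only examples) is to capture the interface and route state inside the guest once on DHCP and once with the static address, then diff the two:
Code:
# while the guest is on DHCP
ip addr show eth0 > /tmp/net-dhcp.txt
ip route show >> /tmp/net-dhcp.txt

# after switching the guest to the static address
ip addr show eth0 > /tmp/net-static.txt
ip route show >> /tmp/net-static.txt

diff /tmp/net-dhcp.txt /tmp/net-static.txt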
 
Looks like "2x Intel® 82576 Dual-Port Gigabit Ethernet Controllers (4 ports)" from this SuperMicro link.

http://www.supermicro.com/products/system/1U/6016/SYS-6016T-URF4_.cfm?UIO=N

dmesg output for the igb driver:
Code:
root@vhost3:/proc# dmesg | grep igb
igb 0000:05:00.0: power state changed by ACPI to D0
igb 0000:05:00.0: power state changed by ACPI to D0
igb 0000:05:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
igb 0000:05:00.0: setting latency timer to 64
igb 0000:05:00.0: irq 55 for MSI/MSI-X
igb 0000:05:00.0: irq 56 for MSI/MSI-X
igb 0000:05:00.0: Intel(R) Gigabit Ethernet Network Connection
igb 0000:05:00.0: eth0: (PCIe:2.5GT/s:Width x4)
igb 0000:05:00.0: eth0: MAC: 00:25:90:54:d2:fc
igb 0000:05:00.0: eth0: PBA No: FFFFFF-0FF
igb 0000:05:00.0: LRO is disabled
igb 0000:05:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
igb 0000:05:00.1: power state changed by ACPI to D0
igb 0000:05:00.1: power state changed by ACPI to D0
igb 0000:05:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
igb 0000:05:00.1: setting latency timer to 64
igb 0000:05:00.1: irq 57 for MSI/MSI-X
igb 0000:05:00.1: irq 58 for MSI/MSI-X
igb 0000:05:00.1: Intel(R) Gigabit Ethernet Network Connection
igb 0000:05:00.1: eth1: (PCIe:2.5GT/s:Width x4)
igb 0000:05:00.1: eth1: MAC: 00:25:90:54:d2:fd
igb 0000:05:00.1: eth1: PBA No: FFFFFF-0FF
igb 0000:05:00.1: LRO is disabled
igb 0000:05:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
igb 0000:06:00.0: power state changed by ACPI to D0
igb 0000:06:00.0: power state changed by ACPI to D0
igb 0000:06:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
igb 0000:06:00.0: setting latency timer to 64
igb 0000:06:00.0: irq 59 for MSI/MSI-X
igb 0000:06:00.0: irq 60 for MSI/MSI-X
igb 0000:06:00.0: Intel(R) Gigabit Ethernet Network Connection
igb 0000:06:00.0: eth2: (PCIe:2.5GT/s:Width x2)
igb 0000:06:00.0: eth2: MAC: 00:25:90:54:d2:fe
igb 0000:06:00.0: eth2: PBA No: FFFFFF-0FF
igb 0000:06:00.0: LRO is disabled
igb 0000:06:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
igb 0000:06:00.1: power state changed by ACPI to D0
igb 0000:06:00.1: power state changed by ACPI to D0
igb 0000:06:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
igb 0000:06:00.1: setting latency timer to 64
igb 0000:06:00.1: irq 61 for MSI/MSI-X
igb 0000:06:00.1: irq 62 for MSI/MSI-X
igb 0000:06:00.1: Intel(R) Gigabit Ethernet Network Connection
igb 0000:06:00.1: eth3: (PCIe:2.5GT/s:Width x2)
igb 0000:06:00.1: eth3: MAC: 00:25:90:54:d2:ff
igb 0000:06:00.1: eth3: PBA No: FFFFFF-0FF
igb 0000:06:00.1: LRO is disabled
igb 0000:06:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
igb 0000:05:00.0: DCA enabled
igb 0000:05:00.1: DCA enabled
igb 0000:06:00.0: DCA enabled
igb 0000:06:00.1: DCA enabled
igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
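
Since the guests sit on a bridge over these igb ports, it may also be worth ruling out drops on the physical NICs themselves (a sketch; eth0 is just one of the four ports):
Code:
# driver-level error and drop counters for one igb port
ethtool -S eth0 | grep -iE 'drop|err|miss'

# kernel-level interface statistics for the same port
ip -s link show eth0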
 
