[SOLVED] — ignore

RobFantini
May 24, 2012
Edit: the issue we were having was due to a faulty VLAN configuration, not KVM or PVE.

Hello

We have a network issue that may be a KVM issue.

On 3 PVE systems, pinging from inside a KVM guest loses packets.

From OpenVZ containers, the PVE hosts themselves, and regular hardware, the same ping test does not lose any packets.

has anyone else seen this?

Code:
fbc241  ~ # pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-93
pve-kernel-2.6.32-17-pve: 2.6.32-83
pve-kernel-2.6.32-19-pve: 2.6.32-93
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-18
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-6
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-8
ksm-control-daemon: 1.1-1

We recently started using VLANs on a layer 3 switch; routing between VLANs is done on that switch. We could be doing something wrong with VLAN tagging.

The PVE hosts and OpenVZ containers do not use a vmbr bridge. The KVM guests do.
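For context, here is a minimal sketch of how a tagged VLAN is typically bridged for KVM guests in /etc/network/interfaces on a PVE 2.x host. The interface names, VLAN id 10, and the address are illustrative assumptions, not our actual config:

```text
# /etc/network/interfaces (illustrative sketch only)
auto eth0
iface eth0 inet manual

# vmbr0 carries VLAN 10 (subinterface eth0.10);
# KVM guest NICs are attached to this bridge.
auto vmbr0
iface vmbr0 inet static
    address 10.1.10.2
    netmask 255.255.255.0
    bridge_ports eth0.10
    bridge_stp off
    bridge_fd 0
```

One common way tagging goes wrong with a setup like this is a mismatch between the tag on bridge_ports and the VLANs allowed on the switch port.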

here is an example test result:
Code:
 ~ # ping -c 10 10.1.10.1 
PING 10.1.10.1 (10.1.10.1) 56(84) bytes of data.
64 bytes from 10.1.10.1: icmp_req=1 ttl=64 time=171 ms
64 bytes from 10.1.10.1: icmp_req=2 ttl=64 time=108 ms
64 bytes from 10.1.10.1: icmp_req=3 ttl=64 time=183 ms
64 bytes from 10.1.10.1: icmp_req=4 ttl=64 time=112 ms
64 bytes from 10.1.10.1: icmp_req=5 ttl=64 time=187 ms
64 bytes from 10.1.10.1: icmp_req=6 ttl=64 time=92.5 ms
64 bytes from 10.1.10.1: icmp_req=7 ttl=64 time=191 ms
64 bytes from 10.1.10.1: icmp_req=8 ttl=64 time=148 ms


--- 10.1.10.1 ping statistics ---
10 packets transmitted, 8 received, 20% packet loss, time 9019ms
rtt min/avg/max/mdev = 92.543/149.645/191.666/37.447 ms
www  ~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:46:4b:4a:be:2a brd ff:ff:ff:ff:ff:ff
    inet 10.1.10.50/24 brd 10.1.10.255 scope global eth0
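To compare runs from the PVE host, an OpenVZ container, and a KVM guest side by side, the loss figure can be pulled out of ping's summary line mechanically. A small sketch (`loss_pct` is a hypothetical helper, not an existing tool):

```shell
#!/bin/sh
# Extract the packet-loss percentage from ping's summary line.
loss_pct() {
    grep -o '[0-9]*% packet loss' | cut -d'%' -f1
}

# Live use would be:  ping -c 10 10.1.10.1 | loss_pct
# Demonstrated here on the captured summary line from above:
echo '10 packets transmitted, 8 received, 20% packet loss, time 9019ms' | loss_pct
# prints: 20
```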
 
