Hi,
I'm currently facing some strange network latency issues on two of my Proxmox hosts. My current setup:
Cluster1: 5 hosts, latest stable PVE
Cluster2: 2 hosts, one on the latest stable, one on the latest pvetest (upgraded to the test repository to see whether the problem would go away; no luck)
On these clusters I have, among others, 8 identical virtual machines, 3 of which are running on Cluster2. On these 3 VMs I am seeing high latency and erratic ping times, while the other 5 VMs have no problem. All VMs are running the latest updates of Debian Squeeze.
Output of pveversion -v on one of the affected hosts:
Code:
root@node9:~# pveversion -v
pve-manager: 2.3-7 (pve-manager/2.3/1fe64d18)
running kernel: 2.6.32-18-pve
proxmox-ve-2.6.32: 2.3-88
pve-kernel-2.6.32-16-pve: 2.6.32-82
pve-kernel-2.6.32-18-pve: 2.6.32-88
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-8
pve-firmware: 1.0-21
libpve-common-perl: 1.0-44
libpve-access-control: 1.0-25
libpve-storage-perl: 2.3-2
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.3-18
ksm-control-daemon: 1.1-1
Config of one of the affected VMs:
Code:
root@node9:~# cat /etc/pve/qemu-server/311.conf
bootdisk: virtio0
cores: 3
ide2: none,media=cdrom
memory: 4096
name: br-app8
net0: virtio=36:8D:23:4F:51:33,bridge=vmbr451
ostype: l26
sockets: 2
virtio0: mainvol00:311/vm-311-disk-1.qcow2,cache=writethrough,size=32G
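One thing I have not tried yet is ruling out the virtio NIC itself. If I read the qm syntax correctly, something like this should switch net0 to the emulated e1000 model while keeping the same MAC and bridge (just a sketch, untested on my side):
Code:
# switch net0 of VM 311 from virtio to e1000, same MAC and bridge (untested sketch)
qm set 311 -net0 e1000=36:8D:23:4F:51:33,bridge=vmbr451
# a full stop/start is needed for the new NIC model to take effect
qm stop 311 && qm start 311
If the jitter disappears with e1000, that would at least point at the virtio driver in the Squeeze guest.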
Latency on the affected host (seems OK):
Code:
root@node9:~# ping belnet.be
PING belnet.be (193.190.130.15) 56(84) bytes of data.
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=1 ttl=55 time=4.41 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=2 ttl=55 time=4.53 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=3 ttl=55 time=4.52 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=4 ttl=55 time=4.44 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=5 ttl=55 time=4.31 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=6 ttl=55 time=4.57 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=7 ttl=55 time=4.53 ms
Latency on the affected VM (not OK):
Code:
root@br-app8:~# ping belnet.be
PING belnet.be (193.190.130.15) 56(84) bytes of data.
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=1 ttl=54 time=0.751 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=2 ttl=54 time=9.63 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=3 ttl=54 time=0.035 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=4 ttl=54 time=4.99 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=5 ttl=54 time=12.4 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=6 ttl=54 time=0.030 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=7 ttl=54 time=10.7 ms
The strange thing is that the very low ping times (0.030 ms and 0.035 ms) should not even be physically possible for a round trip to an external host.
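Since round trips of 0.030 ms to an external host look more like a timekeeping problem inside the guest than a real network measurement, I plan to check which clocksource the guest is using (a sketch, assuming a standard KVM/Linux guest):
Code:
# inside the affected VM: show the active and available clocksources
cat /sys/devices/system/clocksource/clocksource0/current_clocksource
cat /sys/devices/system/clocksource/clocksource0/available_clocksource
If this does not report kvm-clock, that might explain the bogus sub-millisecond readings.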
The same ping from another VM on a different host (seems OK):
Code:
root@br-app2:~# ping belnet.be
PING belnet.be (193.190.130.15) 56(84) bytes of data.
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=1 ttl=54 time=5.05 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=2 ttl=54 time=5.09 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=3 ttl=54 time=4.91 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=4 ttl=54 time=4.90 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=5 ttl=54 time=4.91 ms
64 bytes from fiorano.belnet.be (193.190.130.15): icmp_req=6 ttl=54 time=4.90 ms
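To quantify the jitter instead of eyeballing a handful of replies, I could run a longer ping from both a good and a bad VM and compare the summary lines (a sketch):
Code:
# 100 probes at 0.2 s intervals; compare min/avg/max/mdev between a good and a bad VM
ping -c 100 -i 0.2 belnet.be | tail -n 2
Judging by the samples above, I would expect an mdev well below 1 ms on the good VM and several milliseconds on the bad one.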
Network configuration on the bad host:
Code:
root@node9:~# ifconfig vmbr451
vmbr451 Link encap:Ethernet HWaddr 00:25:90:91:03:48
inet addr:127.45.1.9 Bcast:127.45.1.255 Mask:255.255.255.0
inet6 addr: fe80::225:90ff:fe91:348/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:1819 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:89148 (87.0 KiB) TX bytes:468 (468.0 B)
root@node9:~# ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:25:90:91:03:48
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:1353542 errors:0 dropped:0 overruns:0 frame:0
TX packets:1080980 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1626619667 (1.5 GiB) TX bytes:704216510 (671.5 MiB)
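eth0 shows the SLAVE flag, so it is part of a bond. In case the bonding mode is relevant (balance-rr, for example, can reorder packets and produce odd latency), this is how I would dump the bond status (a sketch; I'm assuming the bond is called bond0, the name may differ):
Code:
# bonding mode and per-slave state (bond name assumed to be bond0)
cat /proc/net/bonding/bond0
cat /sys/class/net/bond0/bonding/mode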
Any help in resolving this would be greatly appreciated!
Kind regards,
Koen