Packet Loss

bigfishinnet

Member
Feb 2, 2010
Hi, I have bad packet loss on Windows 2008 R2 VMs on two servers. I have other servers in the same school and I don't get packet loss with them. Both servers are configured the same, except that one has an E1000 NIC and the other VirtIO. If I ping the host address I get NO packet loss, but if I ping the 2008 R2 servers I get between 8-50% packet loss. The end result is very bad lag and poor performance when using these servers for Remote Desktop (Terminal Server).

These servers are standalone. It appears the older version of Proxmox (2.3-13) does not give me this problem. This is causing a bit of an issue as we use Thinstation to connect to these servers and the performance is bad.
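
For reference, this is roughly how I am measuring the loss (the VM address below is only a placeholder, substitute the real one):

ping -c 100 10.0.10.15        # Proxmox host - no packets lost
ping -c 100 <2008r2-vm-ip>    # Windows 2008 R2 VM - 8-50% loss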

Can anyone help?

Thanks Stephen

Details below for one of the servers I am having issues with:
pveperf
CPU BOGOMIPS: 22667.80
REGEX/SECOND: 1088023
HD SIZE: 16.73 GB (/dev/mapper/pve-root)
BUFFERED READS: 79.78 MB/sec
AVERAGE SEEK TIME: 5.13 ms
FSYNCS/SECOND: 2569.13
DNS EXT: 1769.72 ms
DNS INT: 0.69 ms (springfield.lan)

pveversion -v
proxmox-ve-2.6.32: 3.4-163 (running kernel: 2.6.32-41-pve)
pve-manager: 3.4-11 (running version: 3.4-11/6502936f)
pve-kernel-2.6.32-32-pve: 2.6.32-136
pve-kernel-2.6.32-41-pve: 2.6.32-163
pve-kernel-2.6.32-37-pve: 2.6.32-150
lvm2: 2.02.98-pve4
clvm: 2.02.98-pve4
corosync-pve: 1.4.7-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.10-3
pve-cluster: 3.0-19
qemu-server: 3.4-6
pve-firmware: 1.1-4
libpve-common-perl: 3.0-24
libpve-access-control: 3.0-16
libpve-storage-perl: 3.0-33
pve-libspice-server1: 0.12.4-3
vncterm: 1.1-8
vzctl: 4.0-1pve6
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 2.2-11
ksm-control-daemon: 1.1-1
glusterfs-client: 3.5.2-1


Details below (pveperf and pveversion -v) for a server with similar hardware that has NO problems:

CPU BOGOMIPS: 36256.40
REGEX/SECOND: 942058
HD SIZE: 16.73 GB (/dev/mapper/pve-root)
BUFFERED READS: 140.93 MB/sec
AVERAGE SEEK TIME: 4.06 ms
FSYNCS/SECOND: 2657.57
DNS EXT: 1082.89 ms
DNS INT: 1.17 ms (springfield.lan)


root@proxmox:~# pveversion -v
pve-manager: 2.3-13 (pve-manager/2.3/7946f1f1)
running kernel: 2.6.32-19-pve
proxmox-ve-2.6.32: 2.3-96
pve-kernel-2.6.32-11-pve: 2.6.32-66
pve-kernel-2.6.32-19-pve: 2.6.32-96
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-4
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-36
qemu-server: 2.3-20
pve-firmware: 1.0-21
libpve-common-perl: 1.0-49
libpve-access-control: 1.0-26
libpve-storage-perl: 2.3-7
vncterm: 1.0-4
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.4-10
ksm-control-daemon: 1.1-1
 

Here is the network config for the server I am having issues with...

root@proxmox2:~# cat /etc/network/interfaces
# network interface settings
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth1
iface eth1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.0.10.15
        netmask 255.255.0.0
        gateway 10.0.10.10
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

auto vmbr1
iface vmbr1 inet static
        address 10.0.10.16
        netmask 255.255.0.0
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0
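
If it is useful, the bridge membership and state can be double-checked with standard bridge-utils/iproute2 commands (nothing Proxmox-specific, output not pasted here):

brctl show                 # list bridges and which ports (eth0, VM tap devices) are attached
ip addr show vmbr0         # confirm the bridge address and that it is UP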
 
Can you provide the output of this as well?

ethtool eth0 and ethtool eth1

I recently found an issue on one of our nodes running at half duplex instead of full duplex.

So better to check :)
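
Something along these lines - the forced values are only an example and would need to match the switch port:

ethtool eth0
ethtool eth1
# if either reports "Duplex: Half", speed/duplex can be forced, e.g.:
ethtool -s eth0 speed 100 duplex full autoneg off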
 

Hi - thanks for your reply. Listed below is the ethtool output for eth0 and eth1 on the server where I am experiencing packet loss on the Windows VMs. At the very bottom is the ethtool output for the older Proxmox server with which I am having no issues.

Recently the 1 Gb switch they were connected to stopped working, so they are currently running at 100 Mb/s. In terms of the ethtool output it all looks normal - I think?

Thanks

Stephen

eth0
Settings for eth0:

Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: g
Wake-on: g
Link detected: yes


eth1
Settings for eth1:

Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supported pause frame use: No
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: No
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: Unknown
Supports Wake-on: g
Wake-on: g
Link detected: yes


ethtool output for the server which does not drop any packets when pinging the VM:

ethtool eth5
Settings for eth5:
Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Advertised pause frame use: Symmetric
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Full
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
MDI-X: on
Supports Wake-on: pumbg
Wake-on: d
Current message level: 0x00000007 (7)
Link detected: yes
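
For completeness, the drop/error counters on the physical NICs can also be checked (commands only, I have not included the output here):

ip -s link show eth0       # RX/TX errors and dropped counters
ethtool -S eth0            # driver-specific statistics, if supported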
 
Just an update: I tried Proxmox 4 and I still have packet loss issues. If I log in to the Proxmox host and ping the VM from there, I STILL get packet loss. Are we talking about a hardware problem here? Can anyone advise? Thanks for listening and for your help so far.
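
For reference, when pinging a VM from its own host the traffic only crosses the bridge and the VM's tap device, not the physical switch, so this is roughly what I am looking at (tap100i0 is only an example name, the actual device depends on the VM ID):

brctl show vmbr0            # which tap devices are attached to the bridge
ip -s link show tap100i0    # drop/error counters on the VM's tap interface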

Stephen
 
