PXE/TFTP very slow when a couple of KVM VMs are running

Freemind

Member
Feb 2, 2013
Hello,

we use a two-node Proxmox setup with a couple of KVM guests. After putting some guests on node1 (Realtek NIC), we noticed a severe performance drop when booting from the network via PXE/TFTP. It now takes about 20 minutes to transfer a 160 MB image over TFTP (roughly 140 KB/s), whereas it normally takes just seconds in our Gigabit environment.
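For reference, the transfer rate can be reproduced from any Linux client with the tftp-hpa client roughly like this (the server IP and image path are just placeholders):

time tftp 192.0.2.10 -c get pxeboot/image.img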

At first I suspected the NIC, so we tried node2, which has a different NIC (Intel E1000), but the problem is exactly the same there. I also noticed TX overruns on the guest tap interfaces:

tap112i0 Link encap:Ethernet HWaddr 02:89:d4:89:56:7d
inet6 addr: fe80::89:d4ff:fe89:567d/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:3741134 errors:0 dropped:0 overruns:0 frame:0
TX packets:130830327 errors:0 dropped:0 overruns:3771 carrier:0
collisions:0 txqueuelen:500
RX bytes:339090840 (323.3 MiB) TX bytes:41335853211 (38.4 GiB)


tap113i0 Link encap:Ethernet HWaddr d2:95:5d:09:a6:60
inet6 addr: fe80::d095:5dff:fe09:a660/64 Scope:Link
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:3253275 errors:0 dropped:0 overruns:0 frame:0
TX packets:129968189 errors:0 dropped:0 overruns:4516 carrier:0
collisions:0 txqueuelen:500
RX bytes:3423777422 (3.1 GiB) TX bytes:41036954761 (38.2 GiB)
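Since both tap devices sit at txqueuelen:500 while showing TX overruns, one thing we may try (just an idea on my side, not a confirmed fix) is a larger transmit queue:

# raise the tap device's transmit queue from 500 to 1000 (value is a guess)
ip link set dev tap112i0 txqueuelen 1000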

I captured the packet flow between the TFTP/PXE server and the guest; nothing in it looks like an error, and there are no unusual delays or anything else suspicious.
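If anyone wants to double-check, a host-side capture along these lines should show the same traffic (the bridge name and guest IP are placeholders):

# watch the TFTP/UDP traffic for one guest on the host bridge
tcpdump -ni vmbr0 udp and host 192.0.2.50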
Some information about the nodes:

node1:
2.6.32-16-pve

node2:
2.6.32-17-pve

Only the kernel version differs; the rest of the software is identical on both nodes:
dpkg -l | grep pve
ii clvm 2.02.95-1pve2 Cluster LVM Daemon for lvm2
ii corosync-pve 1.4.4-1 Standards-based cluster framework (daemon and modules)
ii dmsetup 2:1.02.74-1pve2 Linux Kernel Device Mapper userspace library
ii fence-agents-pve 3.1.9-1 fence agents for redhat cluster suite
ii libcorosync4-pve 1.4.4-1 Standards-based cluster framework (libraries)
ii libdevmapper1.02.1 2:1.02.74-1pve2 Linux Kernel Device Mapper userspace library
ii libopenais3-pve 1.1.4-2 Standards-based cluster framework (libraries)
ii libpve-access-control 1.0-25 Proxmox VE access control library
ii libpve-common-perl 1.0-41 Proxmox VE base library
ii libpve-storage-perl 2.0-36 Proxmox VE storage management library
ii lvm2 2.02.95-1pve2 Linux Logical Volume Manager
ii openais-pve 1.1.4-2 Standards-based cluster framework (daemon and modules)
ii pve-cluster 1.0-34 Cluster Infrastructure for Proxmox Virtual Environment
ii pve-firmware 1.0-21 Binary firmware code for the pve-kernel
ii pve-kernel-2.6.32-17-pve 2.6.32-83 The Proxmox PVE Kernel Image
ii pve-manager 2.2-32 The Proxmox Virtual Environment
ii pve-qemu-kvm 1.3-10 Full virtualization on x86 hardware
ii redhat-cluster-pve 3.1.93-2 Red Hat cluster suite
ii resource-agents-pve 3.9.2-3 resource agents for redhat cluster suite
ii vzctl 4.0-1pve2 OpenVZ - server virtualization solution - control tools
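One more idea from my side: since the overruns appear with both the Realtek and the Intel NIC, could segmentation offloading on the tap devices play a role? Something like this could be used to inspect and, purely as an experiment, disable it (interface name as above; this is a guess, not a confirmed fix):

# show the current offload settings of the tap device
ethtool -k tap112i0
# experimentally turn off TCP segmentation and generic segmentation offload
ethtool -K tap112i0 tso off gso off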



I hope you have some hints for us, thank you!

best regards,
Freemind
 
