After updating, errors on e1000 virtual NICs

markwhi (Feb 19, 2016)
After updating proxmox-ve today I am seeing a very high error rate on e1000 virtual interfaces in QEMU VMs. Switching to VirtIO seems to resolve the errors but is not a long-term solution for me. Did something change recently with e1000 support in QEMU?

markc@rproxy-01:~$ dmesg|grep eth0
[ 1.159497] e1000 0000:00:12.0: eth0: (PCI:33MHz:32-bit) 32:5f:13:ca:3f:92
[ 1.159506] e1000 0000:00:12.0: eth0: Intel(R) PRO/1000 Network Connection

markc@rproxy-01:~$ netstat -i
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 16228479 1028559 0 0 15681363 0 0 0 BMRU
lo 16436 0 2210 0 0 0 2210 0 0 0 LRU
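For scale: that RX-ERR counter works out to roughly 6% of received packets. A quick way to compute the rate from the `netstat -i` counters (a sketch using the eth0 line pasted above; on a live system you would pipe the real `netstat -i` output instead):

```shell
# Compute eth0's RX error rate from the `netstat -i` counters above.
# The line below is copied from the output in this post.
line='eth0 1500 0 16228479 1028559 0 0 15681363 0 0 0 BMRU'
# Field 4 is RX-OK, field 5 is RX-ERR.
rate=$(echo "$line" | awk '{ printf "%.1f", 100 * $5 / ($4 + $5) }')
echo "eth0 RX error rate: ${rate}%"
```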
 
Version info:

# pveversion --verbose
proxmox-ve: 4.1-34 (running kernel: 4.2.6-1-pve)
pve-manager: 4.1-13 (running version: 4.1-13/cfb599fb)
pve-kernel-2.6.32-37-pve: 2.6.32-150
pve-kernel-4.2.6-1-pve: 4.2.6-36
pve-kernel-2.6.32-43-pve: 2.6.32-166
pve-kernel-2.6.32-34-pve: 2.6.32-140
pve-kernel-2.6.32-26-pve: 2.6.32-114
lvm2: 2.02.116-pve2
corosync-pve: 2.3.5-2
libqb0: 1.0-1
pve-cluster: 4.0-32
qemu-server: 4.0-55
pve-firmware: 1.1-7
libpve-common-perl: 4.0-48
libpve-access-control: 4.0-11
libpve-storage-perl: 4.0-40
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.5-5
pve-container: 1.0-44
pve-firewall: 2.0-17
pve-ha-manager: 1.0-21
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.5-7
lxcfs: 0.13-pve3
cgmanager: 0.39-pve1
criu: 1.6.0-1
fence-agents-pve: not correctly installed
openvswitch-switch: 2.3.2-2
 
Hi,
why is VirtIO not a long-term solution for you?
It is paravirtualized, so you should prefer VirtIO over e1000.
What OS do you use in the VM?
 
After investigating, it looks like VirtIO will probably work. My concern was driver support in some of my BSD guests.

Still, these errors are new with the upgrade, and I'd like to understand what caused them and how to resolve them.
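For anyone digging into this: the aggregate RX-ERR column doesn't say what kind of errors these are. The per-type counters in sysfs (rx_crc_errors, rx_frame_errors, rx_length_errors, etc.) narrow it down. A small sketch, using lo here only as a stand-in interface; substitute eth0 inside the affected VM:

```shell
# Print the per-type receive error counters for an interface.
# lo is a placeholder; use eth0 inside the affected VM.
iface=lo
for f in rx_errors rx_crc_errors rx_frame_errors rx_length_errors rx_fifo_errors; do
    printf '%s: %s\n' "$f" "$(cat /sys/class/net/$iface/statistics/$f)"
done
rx_err=$(cat /sys/class/net/$iface/statistics/rx_errors)
```

`ethtool -S eth0` inside the guest shows similar driver-level statistics, where supported.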

Thanks.
 
I noticed the same kind of problem using the e1000 NIC on a KVM VM: about a third of my packets had errors on Proxmox 4, and I did not have this problem with Proxmox 3.4.
Switching to the VirtIO NIC was a solution for me, but there is apparently a problem with the e1000 driver (which is the default NIC Proxmox proposes when creating a new VM).
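For completeness, the NIC model of an existing VM can also be switched from the CLI with `qm set`; a sketch, where the VM ID 100 and bridge vmbr0 are placeholders for your own setup (note the VM gets a new MAC unless you pass the old one explicitly, e.g. `virtio=<mac>`):

```shell
# Change net0 of VM 100 from e1000 to VirtIO (Proxmox `qm` CLI).
# 100 and vmbr0 are placeholders for your VM ID and bridge.
qm set 100 --net0 virtio,bridge=vmbr0
```

The guest needs VirtIO network drivers for this to work, which is why I was hesitant about some BSD guests.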
 
