10Gb Network in VM

sverri

Hello,

I have a problem with network speed. On the hardware side I have an Intel X520 10Gb card installed, and it runs fine under Proxmox!
iperf reports around 8.9 Gbit/s there.

When I create a bridge and attach it to a VM, I only get values of about 255 Mbit/s.
The same under Debian as under Win10; all drivers are installed.
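For reference, a plain Linux bridge like the one described is typically defined in /etc/network/interfaces on the Proxmox host roughly like this (a sketch only; the vmbr3/ens3f1 names match the outputs below, the address is illustrative):

auto ens3f1
iface ens3f1 inet manual

auto vmbr3
iface vmbr3 inet static
        address 10.33.33.1/24     # example address, not from the thread
        bridge-ports ens3f1       # enslave the 10GbE port
        bridge-stp off
        bridge-fd 0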

Does anyone have an idea for me? I have already tried everything I could find here. :confused:

Regards, Franko
 
Hi

Current drivers are installed in the VM.

Proxmox Hardware

dmesg | grep -i ens3f1
[ 4.502279] ixgbe 0000:01:00.1 ens3f1: renamed from eth3
[ 15.104778] vmbr3: port 1(ens3f1) entered blocking state
[ 15.104779] vmbr3: port 1(ens3f1) entered disabled state
[ 15.104834] device ens3f1 entered promiscuous mode
[ 15.234340] ixgbe 0000:01:00.1: registered PHC device on ens3f1
[ 15.275131] 8021q: adding VLAN 0 to HW filter on device ens3f1
[ 19.970117] ixgbe 0000:01:00.1 ens3f1: NIC Link is Up 10 Gbps, Flow Control: RX/TX
[ 19.970262] vmbr3: port 1(ens3f1) entered blocking state
[ 19.970264] vmbr3: port 1(ens3f1) entered forwarding state

# modinfo ixgbe
filename: /lib/modules/5.3.18-2-pve/updates/drivers/net/ethernet/intel/ixgbe/ixgbe.ko
version: 5.6.5
license: GPL
description: Intel(R) 10GbE PCI Express Linux Network Driver
author: Intel Corporation, <linux.nics@intel.com>
srcversion: B095A263FEC677D5909C376
alias: pci:v00008086d000015E5sv*sd*bc*sc*i*
alias: pci:v00008086d000015E4sv*sd*bc*sc*i*
alias: pci:v00008086d000015CEsv*sd*bc*sc*i*
alias: pci:v00008086d000015CCsv*sd*bc*sc*i*
alias: pci:v00008086d000015CAsv*sd*bc*sc*i*
alias: pci:v00008086d000015C8sv*sd*bc*sc*i*
alias: pci:v00008086d000015C7sv*sd*bc*sc*i*
alias: pci:v00008086d000015C6sv*sd*bc*sc*i*
alias: pci:v00008086d000015C4sv*sd*bc*sc*i*
alias: pci:v00008086d000015C3sv*sd*bc*sc*i*
alias: pci:v00008086d000015C2sv*sd*bc*sc*i*
alias: pci:v00008086d000015AEsv*sd*bc*sc*i*
alias: pci:v00008086d000015ADsv*sd*bc*sc*i*
alias: pci:v00008086d000015ACsv*sd*bc*sc*i*
alias: pci:v00008086d000015ABsv*sd*bc*sc*i*
alias: pci:v00008086d000015B0sv*sd*bc*sc*i*
alias: pci:v00008086d000015AAsv*sd*bc*sc*i*
alias: pci:v00008086d000015D1sv*sd*bc*sc*i*
alias: pci:v00008086d00001563sv*sd*bc*sc*i*
alias: pci:v00008086d00001560sv*sd*bc*sc*i*
alias: pci:v00008086d00001558sv*sd*bc*sc*i*
alias: pci:v00008086d0000154Asv*sd*bc*sc*i*
alias: pci:v00008086d00001557sv*sd*bc*sc*i*
alias: pci:v00008086d0000154Dsv*sd*bc*sc*i*
alias: pci:v00008086d00001528sv*sd*bc*sc*i*
alias: pci:v00008086d000010F8sv*sd*bc*sc*i*
alias: pci:v00008086d0000151Csv*sd*bc*sc*i*
alias: pci:v00008086d00001529sv*sd*bc*sc*i*
alias: pci:v00008086d0000152Asv*sd*bc*sc*i*
alias: pci:v00008086d000010F9sv*sd*bc*sc*i*
alias: pci:v00008086d00001514sv*sd*bc*sc*i*
alias: pci:v00008086d00001507sv*sd*bc*sc*i*
alias: pci:v00008086d000010FBsv*sd*bc*sc*i*
alias: pci:v00008086d00001517sv*sd*bc*sc*i*
alias: pci:v00008086d000010FCsv*sd*bc*sc*i*
alias: pci:v00008086d000010F7sv*sd*bc*sc*i*
alias: pci:v00008086d00001508sv*sd*bc*sc*i*
alias: pci:v00008086d000010DBsv*sd*bc*sc*i*
alias: pci:v00008086d000010F4sv*sd*bc*sc*i*
alias: pci:v00008086d000010E1sv*sd*bc*sc*i*
alias: pci:v00008086d000010F1sv*sd*bc*sc*i*
alias: pci:v00008086d000010ECsv*sd*bc*sc*i*
alias: pci:v00008086d000010DDsv*sd*bc*sc*i*
alias: pci:v00008086d0000150Bsv*sd*bc*sc*i*
alias: pci:v00008086d000010C8sv*sd*bc*sc*i*
alias: pci:v00008086d000010C7sv*sd*bc*sc*i*
alias: pci:v00008086d000010C6sv*sd*bc*sc*i*
alias: pci:v00008086d000010B6sv*sd*bc*sc*i*
depends: dca
retpoline: Y
name: ixgbe
vermagic: 5.3.18-2-pve SMP mod_unload modversions
parm: IntMode:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default 2 (array of int)
parm: InterruptType:Change Interrupt Mode (0=Legacy, 1=MSI, 2=MSI-X), default IntMode (deprecated) (array of int)
parm: MQ:Disable or enable Multiple Queues, default 1 (array of int)
parm: DCA:Disable or enable Direct Cache Access, 0=disabled, 1=descriptor only, 2=descriptor and data (array of int)
parm: RSS:Number of Receive-Side Scaling Descriptor Queues, default 0=number of cpus (array of int)
parm: VMDQ:Number of Virtual Machine Device Queues: 0/1 = disable (1 queue) 2-16 enable (default=8) (array of int)
parm: max_vfs:Number of Virtual Functions: 0 = disable (default), 1-63 = enable this many VFs (array of int)
parm: VEPA:VEPA Bridge Mode: 0 = VEB (default), 1 = VEPA (array of int)
parm: InterruptThrottleRate:Maximum interrupts per second, per vector, (0,1,956-488281), default 1 (array of int)
parm: LLIPort:Low Latency Interrupt TCP Port (0-65535) (array of int)
parm: LLIPush:Low Latency Interrupt on TCP Push flag (0,1) (array of int)
parm: LLISize:Low Latency Interrupt on Packet Size (0-1500) (array of int)
parm: LLIEType:Low Latency Interrupt Ethernet Protocol Type (array of int)
parm: LLIVLANP:Low Latency Interrupt on VLAN priority threshold (array of int)
parm: FdirPballoc:Flow Director packet buffer allocation level:
1 = 8k hash filters or 2k perfect filters
2 = 16k hash filters or 4k perfect filters
3 = 32k hash filters or 8k perfect filters (array of int)
parm: AtrSampleRate:Software ATR Tx packet sample rate (array of int)
parm: FCoE:Disable or enable FCoE Offload, default 1 (array of int)
parm: MDD:Malicious Driver Detection: (0,1), default 1 = on (array of int)
parm: LRO:Large Receive Offload (0,1), default 0 = off (array of int)
parm: allow_unsupported_sfp:Allow unsupported and untested SFP+ modules on 82599 based adapters, default 0 = Disable (array of int)
parm: dmac_watchdog:DMA coalescing watchdog in microseconds (0,41-10000), default 0 = off (array of int)
parm: vxlan_rx:VXLAN receive checksum offload (0,1), default 1 = Enable (array of int)
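
As a side note: if any of these module parameters ever need changing, they are normally set via a modprobe config plus a reboot. A minimal sketch (the parameter choice is purely illustrative, not a recommendation from this thread):

# /etc/modprobe.d/ixgbe.conf -- example only
options ixgbe InterruptThrottleRate=1

# apply: rebuild the initramfs and reboot
# (reloading ixgbe live would drop the link)
update-initramfs -u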


# ethtool -i ens3f1
driver: ixgbe
version: 5.6.5
firmware-version: 0x800003e1, 255.65535.255
expansion-rom-version:
bus-info: 0000:01:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes


Win10 VM with iperf

0.00-10.00 sec 197 MBytes 165 Mbits/sec

Debian 10 VM

ethtool ens18
Settings for ens18:
Supported ports: [ ]
Supported link modes: Not reported
Supported pause frame use: No
Supports auto-negotiation: No
Supported FEC modes: Not reported
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Advertised FEC modes: Not reported
Speed: Unknown!
Duplex: Unknown! (255)
Port: Other
PHYAD: 0
Transceiver: internal
Auto-negotiation: off
Link detected: yes

iperf -c 10.33.33.222 -p 39160
------------------------------------------------------------
Client connecting to 10.33.33.222, TCP port 39160
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.33.33.100 port 47344 connected with 10.33.33.222 port 39160
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 207 MBytes 174 Mbits/sec
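
A single iperf stream can be CPU-bound inside a VM; one quick way to tell a CPU limit from a link limit is to compare one stream against several parallel streams (same endpoint as above):

# one stream, as above
iperf -c 10.33.33.222 -p 39160
# four parallel streams; if the aggregate scales up, the per-stream/CPU
# path is the bottleneck rather than the link itself
iperf -c 10.33.33.222 -p 39160 -P 4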



Win10 is not the main problem; Debian matters more to me, but it is no faster there either!


Regards, Franko
 
Here is the qm config VMID output as well.

Debian 10

qm config 103
agent: 1
balloon: 1000
boot: c
bootdisk: virtio0
cores: 4
memory: 17000
name: nxc
net0: virtio=6E:43:4D:DB:1A:D6,bridge=vmbr3,firewall=1
numa: 0
onboot: 1
ostype: l26
parent: b06
protection: 1
scsihw: virtio-scsi-pci
smbios1: uuid=a67cd67c-1a8a-434a-b436-0269fc160458
sockets: 2
spice_enhancements: foldersharing=1
virtio0: WD2x6:vm-103-disk-0,size=4300G
vmgenid: 8d768993-85b8-49af-a10e-cbc28f457f1f
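
One option visible from this config: with cores: 4, virtio-net multiqueue can lift VM throughput on multi-core guests. A sketch of how it could be enabled here (my suggestion, not something proposed in the thread; queue count matched to the cores setting, existing MAC/bridge kept):

# add queues=4 to the existing net0 definition of VM 103
qm set 103 --net0 virtio=6E:43:4D:DB:1A:D6,bridge=vmbr3,firewall=1,queues=4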




qm config VMID for Win10

agent: 1
balloon: 1000
bios: ovmf
bootdisk: sata0
cores: 4
memory: 10000
name: WIN10
net1: virtio=C2:B7:69:64:FC:DB,bridge=vmbr3,firewall=1
numa: 0
ostype: win10
parent: a02
protection: 1
sata0: SSD1.46:vm-102-disk-0,cache=writeback,size=300G
sata2: local:iso/virtio-win-0.1.173.iso,media=cdrom,size=384670K
smbios1: uuid=d5b13f42-5e7e-40d2-9753-9744593e8a83
sockets: 2
vga: virtio
vmgenid: 4d1a153e-87d1-43ed-97da-371995df8b64
 
A virtio NIC should be a good choice for your VMs. Could you please try to
  • Create a fresh Proxmox VE VM
  • Disable the firewall for it
  • Update it and install iperf
  • While monitoring resource usage (especially CPU usage on the client), run iperf again with this VM both as client and server (see the sketch below)
And please also post cat /etc/network/interfaces from the host.
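
A minimal sketch of that test sequence (addresses are placeholders):

# inside the fresh test VM, first as server:
iperf -s
# from the other endpoint:
iperf -c <test-vm-ip>

# then swap roles -- test VM as client:
iperf -c <remote-ip>
# watch CPU usage on the client side in parallel, e.g. with:
top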
 
