We are using the host CPU type in the VMs, and the same iperf test (same parameters) from the hosts.
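(For context, a typical iperf test pair looks like the following; the -t and -P values are illustrative assumptions, since the exact parameters are not quoted in the thread:)

# on the receiving host or VM
iperf -s
# on the sending side; 10.0.0.1 is a placeholder address
iperf -c 10.0.0.1 -t 30 -P 4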
Why in the VM?
ens19: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9000...
CPU is not an issue; we have 40 cores per node. RAM is not an issue either; we have 512 GB to 1 TB per node. We are testing with one VM per node. Inside the VM, with 16 vCPUs...
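(A minimal sketch of how such a VM could be configured from the node, assuming the VM ID 127 that appears later in the thread; qm set with --cores and --cpu is standard Proxmox tooling:)

# give the VM 16 vCPUs and expose the host CPU type to the guest
qm set 127 --cores 16 --cpu host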
What additional information do you need?
The host CPUs are 4x Xeon E7 48xx, 4 sockets in total.
How can it be more than 12 Gbit/s if the driver is 10 Gbit/s?
iperf -c ...
Sure, we are using the kernel with your patches:
4.15.18-10-pve #1 SMP PVE 4.15.18-32 (Sat, 19 Jan 2019 10:09:37 +0100) x86_64
I did check the buffers...
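(For reference, "buffers" here usually means the kernel socket/TCP buffer limits; a minimal sketch of how to inspect and raise them for 10G testing, with values that are illustrative assumptions, not the poster's actual settings:)

# inspect the current socket buffer limits
sysctl net.core.rmem_max net.core.wmem_max
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# raise them for high-bandwidth testing (illustrative values)
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864
sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"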
Yes, it can be, if it is not IPv6.
It is the Proxmox kernel. What should I check? What should I do?
qm config 127
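(The fields of that config most relevant to this thread would look something like the following; these values are hypothetical, not the actual output for VM 127:)

# hypothetical excerpt of `qm config 127` output
cores: 16
cpu: host
memory: 65536
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
numa: 1
sockets: 1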
Settings for ens8f1:
Supported ports: [ FIBRE ]
Supported link modes: 1000baseT/Full...
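(That output comes from ethtool on the node; to reproduce it, and to check the NIC statistics for drops while testing, something like:)

ethtool ens8f1
# statistic names vary by driver; the grep is only a convenience
ethtool -S ens8f1 | grep -i drop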
iperf without additional flags, host to host: the same 12 Gbit/s.
proxmox-ve: 5.3-1 (running kernel: 4.15.18-11-pve)
pve-manager: 5.3-9 (running...
Maybe this is nothing new and there is already something available, but I can't find it. Please point me in the right direction.
What we have:
I tried MTU 9000 on the nodes, on the bridges (vmbr), and on the VM interfaces (see the sketch below).
It is better now: from VM to host, 14.7 Gbit/sec; from VM to VM on the...
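(For reference, a sketch of the MTU change described above, using the interface names that appear in this thread; the address and netmask are placeholders:)

# /etc/network/interfaces on the node (Proxmox 5.x ifupdown style)
auto ens8f1
iface ens8f1 inet manual
    mtu 9000

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.1
    netmask 255.255.255.0
    bridge_ports ens8f1
    bridge_stp off
    bridge_fd 0
    mtu 9000

# inside the VM, on the virtio NIC
ip link set dev ens19 mtu 9000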
@janos thank you!
@janos, can you please point me to one of them? I can't find anything workable.
Has anyone succeeded in getting 10 Gbit/s inside a VM?
I'm using HP DL580 G7 servers with 10 Gbit/s NetXen interfaces. Proxmox 5.2.