I just saw that there were a lot of fixes in NetKVM for Windows in the latest virtio drivers:
https://fedorapeople.org/groups/virt/virtio-win/CHANGELOG
Hi Andreas, I thought about it and I should add a few details about my network setup:
The NICs of both Windows Server 2016 VMs are connected to a bridge:
Code:
brctl show vmbr1
bridge name     bridge id               STP enabled     interfaces
vmbr1           8000.aace816c169a       no              tap0
                                                        tap100i0
                                                        tap101i0
tap0 is an OpenVPN server tap device and the other two are the VMs.
So there is no physical device attached to this particular bridge. The NICs show up as 10 GBit devices inside the VMs. So could it just be that the VMs can send faster than the bridge can handle the traffic?
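One way to check whether the bridge itself is the bottleneck would be to measure raw throughput between the two VMs with iperf3 (just a sketch - iperf3 would have to be installed in both Windows guests, and the address below is a placeholder):
Code:
# on VM 100, start the receiver
iperf3 -s

# on VM 101, run a 30-second test against it
iperf3 -c <ip-of-vm-100> -t 30
If that pushes several GBit/s without drops, the bridge is probably not the limiting factor.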
Is it a good idea to use the "Rate Limit" option on the NICs in the PVE configuration? Since the "outside" is connected via WAN / OpenVPN, it would probably never exceed 300 MBit/s, and that would still be enough for backups and everything else.
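If I read the PVE docs correctly, the rate option on a netX device is given in MB/s, so 300 MBit/s would be roughly rate=37.5. A sketch only (the VM ID and MAC are placeholders; the existing MAC and bridge have to be repeated, otherwise a new MAC is generated):
Code:
qm set 100 --net0 virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr1,rate=37.5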
Does anyone have experience with this? Or am I thinking in the wrong direction?
Hi Udo,

Hi Andreas,
your network bridge has no real interface - does this mean all traffic (backup, Datev DB, RDP) goes through the OpenVPN connection?
Or is the high network load between VM 100 and VM 101?
If the main traffic goes over OpenVPN, what do the pings between VM 100 and VM 101 look like when this happens?
Perhaps the OpenVPN tunnel is the problem? (MTU? - see the quick check below)
And as mac.linux.free already wrote, Open vSwitch is perhaps a solution - I have measured better performance with Open vSwitch than with the normal Linux bridge.
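A quick MTU check is a ping with the don't-fragment flag set, stepping the payload size down until it passes; from a Linux machine on one side of the tunnel it could look like this (the remote host is a placeholder):
Code:
# 1472 bytes of payload + 28 bytes of ICMP/IP header = a full 1500-byte packet
ping -M do -s 1472 <remote-host>

# if that reports "message too long" / fragmentation needed, try smaller payloads
ping -M do -s 1400 <remote-host>
From a Windows guest the rough equivalent would be ping -f -l 1472 <remote-host>.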
Interesting. So when you don't use NFQUEUE your network is reliable even under IO load in your VMs? And that's the only thing you've changed?
I wonder if we have the same issue then, since from my knowledge I don't use NFQUEUE. At least not on purpose.
I'm using shorewall on my PVE host, which basically manages iptables based on config files.
I have very basic rules: a Drop / Reject policy on my interfaces and Accept rules for the few ports I use.
How can I check if I use NFQUEUE?
Code:
iptables --list | grep queue
does not return anything.
Code:
iptables -nvL | grep NFQUEUE
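Since shorewall generates the rules, it might also be worth grepping the full rule dump and the shorewall configuration itself for NFQUEUE (just a sketch, paths are the Debian defaults):
Code:
iptables-save | grep -i NFQUEUE
grep -ri NFQUEUE /etc/shorewall/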
#XXXX
agent: 1
args: -cpu kvm64,+ssse3
balloon: 16384
bootdisk: virtio0
cores: 8
memory: 32768
name: XXXX.XXXXX.XXX
net0: virtio=52:16:78:785:10,bridge=vmbr1,tag=XXX
net1: virtio=4A:1D:FC:A4:BA:71,bridge=vmbr1,tag=XXX
net2: virtio=4A:C2:97:37:3A:92,bridge=vmbr1,tag=XXX
net3: virtio=BA:E0:74:68:003,bridge=vmbr1,tag=XXX
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=8b3488e0-2dc2-4308-b7fc-9901c2cc1e83
sockets: 1
startup: order=1
tablet: 0
vga: qxl
virtio0: airraid:vm-103-disk-1,size=20G
Since the ping requests are handled mainly by CPU / RAM, I didn't expect local storage IO to introduce this kind of latency. Not even with IO wait.

If your storage is so slow that a ping request is blocked by a pending write on the storage, you should maybe start benchmarking the storage too.
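For a quick storage benchmark, fio on the host would show whether random writes stall under load; a minimal sketch (file path, size and runtime are arbitrary):
Code:
# 4k random writes for 60 seconds; on ZFS, --direct=1 may not be supported and can be dropped
fio --name=randwrite-test --filename=/tmp/fio-testfile --size=1G \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting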
Are you using VirtIO network drivers for the VM NIC?
It would be interesting to see the vm.conf
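On the host the config can be dumped with qm; the VM ID below assumes the affected guest is 101:
Code:
qm config 101
# or read the file directly
cat /etc/pve/qemu-server/101.conf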
#MS Windows Server 2016 Std - Terminal Server
agent: 1
boot: dc
bootdisk: scsi0
cores: 6
ide2: none,media=cdrom
memory: 131072
name: winsrvts
net0: virtio=FF:FF:FF:FF:FF:FF,bridge=vmbr1
numa: 0
onboot: 1
ostype: win10
scsi0: local-zfs:vm-101-disk-1,discard=on,size=500G
scsihw: virtio-scsi-pci
smbios1: uuid=ae1a0841-b9a1-42e9-aea9-1fd0eff3af30
sockets: 1
startup: order=2
I found another thread and a bug report filed about it. It looks like the same problem.
https://forum.proxmox.com/threads/k...ckup-restore-migrate.34362/page-2#post-174960
https://bugzilla.proxmox.com/show_bug.cgi?id=1453