Connection problems under load

EdwardBeen

New Member
Jan 29, 2026
Hello, I am using Proxmox on my server and have noticed that when I run several Windows 10 virtual machines, network connection problems appear on the server: at some point, all connections simply drop for a few seconds. I use Moonlight to manage the virtual machines, and occasionally the image freezes for a couple of seconds, and when it unfreezes, all connections are already broken.

I also noticed that the problem cannot be reproduced under synthetic load. By synthetic, I mean forcing several virtual machines to download large files at maximum speed at the same time. As far as I can tell, it only manifests when there is high CPU load inside a virtual machine. At the moment of a freeze, Uptime Kuma also reports that services which are definitely available at that moment are unreachable, with the errors "timeout of 24000ms exceeded" or "getaddrinfo EAI_AGAIN".

In my virtual machines I have used the Intel e1000e network adapter as well as others, including Intel e1000 and virtio; the same problem persists on all of them. To be clear, the connection problems appear on the entire server at once, not in a specific virtual machine, while my own computer keeps working fine throughout.

The server uses a Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15) network card with the r8169 driver. I am ready to provide any details, just tell me what is needed. I would be grateful for your help.
 
Hey, to help you, version information would be helpful:

pveversion --verbose

Also, system logs from during such an incident would be appreciated:

journalctl -b

You can view the logs of previous boots by supplying a number like so:

journalctl -b-1 # previous boot
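
If the full boot log doesn't show anything around the incident, it can help to narrow the window to the time of a freeze and to watch the kernel log live while reproducing. A rough sketch (the timestamps and the interface name enp3s0 are placeholders you would replace with your own):

journalctl -b --since "2026-01-29 20:00" --until "2026-01-29 20:15"
dmesg -w # leave running in a second terminal while reproducing; watch for r8169/link messages
ip -s link show dev enp3s0 # compare error/drop counters before and after an incident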
 

pveversion --verbose
proxmox-ve: 8.4.0 (running kernel: 6.5.13-6-pve)
pve-manager: 8.4.14 (running version: 8.4.14/b502d23c55afcba1)
proxmox-kernel-helper: 8.1.4
proxmox-kernel-6.8: 6.8.12-17
proxmox-kernel-6.8.12-17-pve-signed: 6.8.12-17
proxmox-kernel-6.8.12-15-pve-signed: 6.8.12-15
proxmox-kernel-6.8.12-13-pve-signed: 6.8.12-13
proxmox-kernel-6.8.12-9-pve-signed: 6.8.12-9
proxmox-kernel-6.5.13-6-pve-signed: 6.5.13-6
proxmox-kernel-6.5: 6.5.13-6
ceph-fuse: 17.2.8-pve2
corosync: 3.1.9-pve1
criu: 3.17.1-2+deb12u2
frr-pythontools: 10.2.3-1+pve1
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx11
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libknet1: 1.30-pve2
libproxmox-acme-perl: 1.6.0
libproxmox-backup-qemu0: 1.5.2
libproxmox-rs-perl: 0.3.5
libpve-access-control: 8.2.2
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.1.2
libpve-cluster-perl: 8.1.2
libpve-common-perl: 8.3.4
libpve-guest-common-perl: 5.2.2
libpve-http-server-perl: 5.2.2
libpve-network-perl: 0.11.2
libpve-rs-perl: 0.9.4
libpve-storage-perl: 8.3.7
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-2
lxcfs: 6.0.0-pve2
novnc-pve: 1.6.0-2
proxmox-backup-client: 3.4.7-1
proxmox-backup-file-restore: 3.4.7-1
proxmox-backup-restore-image: 0.7.0
proxmox-firewall: 0.7.1
proxmox-kernel-helper: 8.1.4
proxmox-mail-forward: 0.3.3
proxmox-mini-journalreader: 1.5
proxmox-offline-mirror-helper: 0.6.8
proxmox-widget-toolkit: 4.3.13
pve-cluster: 8.1.2
pve-container: 5.3.3
pve-docs: 8.4.1
pve-edk2-firmware: 4.2025.02-4~bpo12+1
pve-esxi-import-tools: 0.7.4
pve-firewall: 5.1.2
pve-firmware: 3.16-3
pve-ha-manager: 4.0.7
pve-i18n: 3.4.5
pve-qemu-kvm: 8.0.2-3
pve-xtermjs: 5.5.0-2
qemu-server: 8.4.5
smartmontools: 7.3-pve1
spiceterm: 3.3.1
swtpm: 0.8.0+pve1
vncterm: 1.8.1
zfsutils-linux: 2.2.8-pve1

When I reproduce the problem, nothing is written to the logs.
 
Can you also provide the VM config? Is there a specific reason why you boot the older 6.5 kernel instead of 6.8? Maybe you can try that.
You can also try setting multiqueue to 2 or even 4 per network device; make sure the guest has enough vCPUs.
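
For reference, multiqueue can be set in the GUI on the VM's network device or on the CLI with qm set. A rough sketch (the VMID and the net0 values are placeholders; qm set replaces the whole net0 line, so keep your existing MAC, bridge and firewall options as they are):

qm set <vmid> --net0 virtio=<existing-mac>,bridge=vmbr0,firewall=1,queues=4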
 
VM config:
args: -cpu 'host,hv_vendor_id=null,hv_vapic,hv_stimer,hv_time,hv_synic,hv_vpindex,hv_relaxed,+invtsc,-hypervisor'
audio0: device=intel-hda,driver=none
balloon: 0
bios: ovmf
boot: order=sata0
cores: 4
cpu: host,flags=+pcid
efidisk0: local-lvm:vm-904-disk-2,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:03:00.0,mdev=nvidia-46
machine: pc-q35-8.0
memory: 8192
meta: creation-qemu=8.0.2,ctime=1754816887
name: vm4
net0: virtio=00:1B:21:AA:B4:D3,bridge=vmbr0,firewall=1
numa: 0
ostype: win10
sata0: local-lvm:vm-904-disk-1,size=64G
sata1: zfs-pool:vm-904-disk-0,size=150G
sockets: 1
vmgenid: 962c5a0b-078c-4769-8167-d30742b9be7f

As for the reason for running the old kernel, I just hadn't updated it in a while. I've updated it now and will check again.
 
I also tried completely updating the system, increasing the number of cores to 12, and rolling back the Realtek driver, but Moonlight still freezes quite noticeably and the connection issue persists.
 
I also tried giving the virtual machine its own internet connection by passing through a separate physical USB network adapter, but the connection problems persisted.
 
Did you remove the other network adapter from the VM(s)? Usually that would be the virtual NIC connected to vmbr0. Does this happen only when multiple VMs are running?

A lot of users who had problems with your host's network driver fixed them by switching to the official Realtek driver r8168 [0], both on the Proxmox forum [1] [2] and on the wider internet, so this could be worth a try (a rough install sketch follows the links below). Additionally, it would be great if you could pinpoint under exactly which conditions this starts happening. Does it happen with other operating systems too? Maybe you can grab some logs from the guest, if the host doesn't show anything out of the ordinary.

[0] https://packages.debian.org/sid/r8168-dkms
[1] https://forum.proxmox.com/threads/lan-zugriff-bricht-einfach-ab.178050/#post-825468
[2] https://forum.proxmox.com/threads/pve-9-realtek-rtl8111-issues-again.174205/#post-810291
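
Installing the DKMS driver usually looks roughly like the following. This is only a sketch: it assumes the Debian non-free component is enabled in your APT sources and that the pve-headers meta package matches your running kernel (adjust the package names if apt can't find them):

apt update
apt install pve-headers r8168-dkms
echo "blacklist r8169" > /etc/modprobe.d/blacklist-r8169.conf # skip if the package already blacklists r8169
update-initramfs -u
reboot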
 
In my previous message I already mentioned that I tried rolling back the Realtek driver, and it didn't yield any results. When using the external adapter, I connected to Wi-Fi inside the virtual machine and then disabled the virtual network adapter in the Windows Control Panel (though it was still present in the virtual machine's configuration). I am still trying to pinpoint the exact moment this problem begins, but without success. The last time it happened, it went like this:

A game was running in the virtual machine, the CPU load was no higher than 60 percent, and the game had already been running for about an hour when the problems suddenly started. They manifested as follows: the image in Moonlight froze for a few seconds, then it displayed a warning about a poor connection to the remote PC, but continued to work. At the same time, the connection in the game dropped, the image in Moonlight began to stutter, and soon the stream stopped completely with error code -1. All of this happened within 2–3 minutes, after which the connection stabilized.