We also have problems with this.
For years.
We had an old 2008 R2 server that would suddenly lose its network connection; the network card still showed as connected, but nothing could get through. Resetting the network card sometimes worked, and sometimes didn't.
A full reboot of the VM was the only thing that was guaranteed to work.
This could happen once a month, or once a week.
We recently set up two new servers running Windows Server 2019: a DC and a terminal server running some ERP software.
These are standard new Windows VMs with the E1000 NIC.
After a week we have had to restart the DC once, and the terminal server we sometimes have to restart every 12 hours because of the connectivity issues.
We have had this weird problem for over two years now, and we always blamed the old Windows Server OS, but the problem happens even more on new VMs running fully updated Windows Server 2019.
The trend seems to be that if the server is running software that uses a lot of network resources/connections, like our ERP software, it will end up losing network connectivity.
This seems to have nothing to do with the volume of network transfers, since our SQL server does not have this issue, and we transfer a lot of data out of that VM every day.
We have updated Proxmox regularly for years and the issue just doesn't want to go away.
In retrospect I think this issue was introduced in Proxmox VE 5.
I will try the latest VirtIO drivers (virtio-win-0.1.173) and see if it gets any better once people get back from Easter vacation.
Hi,
sorry for my bad English.
I solved the problem I had had for months on Server 2019. I didn't touch any network parameters except the "Max RSS Queues" setting: I set the RSS queue count to 1 both in the network device inside Windows and in the VM's network hardware in Proxmox. I also turned off the firewall on that network.
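For anyone who wants to try the same change, here is a minimal sketch of both sides of it; the VM ID (101), MAC address, bridge name, and the adapter alias "Ethernet" are placeholders, so substitute your own values:

```
# On the Proxmox host: pin the virtio NIC to a single queue
# (keep your existing model/MAC/bridge; queues=1 is the only change)
qm set 101 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0,queues=1

# Inside the Windows guest (PowerShell): limit RSS to one receive queue
Set-NetAdapterRss -Name "Ethernet" -NumberOfReceiveQueues 1
# ...or disable RSS outright if the driver ignores the queue count:
# Disable-NetAdapterRss -Name "Ethernet"
```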
I have the same problem with Proxmox 5.2.3 and Windows 2012 R2. It happens with both the virtio and E1000 drivers. There is no entry in the Windows or Proxmox logs; the VM just loses connectivity, and the only way to fix it is to reboot the VM. I installed the drivers from virtio-win-0.1.141.iso. The guest is running as a Ceph OSD host with 40 OSD processes, and the Ceph cluster shares out RBD images to other hosts. Is this problem related to storage?
1. On the Proxmox host, a tcpdump on the bugged interface shows only ARP requests being sent by the server, all of them unanswered (see the capture sketch below).
2. The virtual machine can send ARP packets to the outside.
3. The virtual machine cannot receive packets.
VM142, network: tap142i0
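To reproduce that check, a capture along these lines on the host shows the asymmetry; tap142i0 and VM 142 come from the report above, while vmbr0 is an assumed bridge name:

```
# On the Proxmox host: compare ARP traffic on the VM's tap device and on the bridge.
# On an affected VM you see outgoing ARP requests on tap142i0 with no replies,
# even when the replies are visible on the bridge side.
tcpdump -eni tap142i0 arp
tcpdump -eni vmbr0 arp
```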
Hi,
so the Windows guest receives the ARP reply, but doesn't register it in its ARP table?
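One way to check that from inside the guest while the problem is happening (the interface alias "Ethernet" is an assumption; use your adapter's name):

```
# Inside the Windows guest: inspect the ARP/neighbor cache
arp -a
# PowerShell equivalent, which also shows the state of each entry:
Get-NetNeighbor -InterfaceAlias "Ethernet" -AddressFamily IPv4
```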
I can reproduce this CONSTANTLY...
I'm running Blue Iris on a Windows Server 2016 VM. Once half or more of my cameras connect, recent VirtIO drivers (post .141) start producing some crazy latency. The E1000 works perfectly but will randomly drop after backups run, from a completely different 10Gbit interface! I have found that Check_MK reports NFS drops during backups (an NFS share holds all VM backups), but it doesn't really drop. If I'm copying a lot of data to a Windows VM, shares hosted on it will randomly drop. No matter what I do, Windows networking under load results in packet loss... I'm running a three-node cluster with Ceph (10Gbit fiber SAN), gigabit Intel NICs, and Linux bridges. Something is up with the Linux bridges... when I ran Open vSwitch I never had these issues, but I would prefer to use the GUI to configure VLANs and such... Help?
See the screenshot: this is the result of an ICMP ping to Google when I have 5 or more cameras connected to Blue Iris. This happens to all my Windows VMs under load...
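If anyone wants to log this rather than screenshot it, a timestamped continuous ping from inside the guest captures the spikes; 8.8.8.8 stands in for whatever target you test against:

```
# Inside the Windows guest (PowerShell): continuous ping with a timestamp per line,
# written to a log so latency spikes can be correlated with camera load
ping -t 8.8.8.8 | ForEach-Object { "$(Get-Date -Format HH:mm:ss.fff) $_" } | Tee-Object ping-latency.log
```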
Does using a virtio NIC on the VMs make any difference? I think that may have fixed it in my case but it's still somewhat too soon to tell.
Also, I think, although I could be wrong, that the E1000 NICs just use the driver built into Windows; the virtio driver ISO doesn't include anything applicable to them.
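For anyone who wants to test the switch, changing the model is one command on the host once the NetKVM driver from the virtio ISO is installed in the guest; the VM ID and MAC below are placeholders:

```
# Replace the VM's first NIC with the virtio model; reusing the existing MAC
# keeps Windows from treating it as a brand-new network connection
qm set 101 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0
```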
Shockingly, the VMware emulated NICs seem rock solid. My camera server hasn't needed a reboot in weeks. The monitoring server is happier too. I also switched backups to a NAS via a CIFS share on the SAN subnet.

I'm having similar issues. Double-digit loads on the Proxmox node, but rather "high" CPU as well (200-600% on some threads). My first thought was to upgrade the VirtIO drivers (to .185), but after about a week or so, same issue; back to stopping the VM and starting it back up, since the console won't respond. Has anybody tried the RTL8139 NIC? Or is that just folly?
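For reference, the VMware-emulated and Realtek models mentioned above are selected the same way; same placeholder VM ID and MAC as before, and note that Windows needs VMware's driver for vmxnet3:

```
# vmxnet3 (the VMware paravirtual NIC) as an alternative to E1000/virtio
qm set 101 --net0 vmxnet3=AA:BB:CC:DD:EE:FF,bridge=vmbr0
# rtl8139 is only 100 Mbit emulation: usable as a test, slow for production
qm set 101 --net0 rtl8139=AA:BB:CC:DD:EE:FF,bridge=vmbr0
```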