After running kernel 4.15.18-7-pve for a couple of weeks we haven't encountered any of the problems mentioned before by me or any of the other users. The node is running fine and it is hosting a mix of LXC containers and KVM guests.
Hello again,
we really need some help now. We're paying a huge amount of money to Proxmox per year and would like to have this issue solved. We cannot upgrade _any_ machine and are stuck with an old version. Please help us.
Appreciated.
Do you run the latest kernel? If not, please upgrade and test again.

The problem is that it exists only in newer kernel versions. We're running version 4.15.18-4-pve #1 SMP PVE 4.15.18-23 (Thu, 30 Aug 2018 13:04:08 +0200) because all kernels released afterwards are faulty, although we could not find any changes in your git repository concerning our observed bug.
If you have a valid subscription that includes support tickets, please get in touch with our enterprise support team via https://my.proxmox.com
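For anyone stuck in the same situation, here is a minimal sketch of how a PVE 5 node can be held on a specific, known-good kernel while newer ones are tested. It assumes a GRUB-booted host; the package name follows the usual pve-kernel naming, but the exact menu entry string is an assumption and has to be taken from what update-grub actually generates on your machine.

# Keep the known-good kernel installed alongside the newer ones
apt install pve-kernel-4.15.18-4-pve
# List the boot entry titles GRUB generated
grep "menuentry '" /boot/grub/grub.cfg
# In /etc/default/grub, point GRUB_DEFAULT at "submenu>entry", for example:
#   GRUB_DEFAULT="Advanced options for Proxmox Virtual Environment GNU/Linux>Proxmox Virtual Environment GNU/Linux, with Linux 4.15.18-4-pve"
update-grub
reboot

This keeps the newer kernels installed for later testing without booting into them by default.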
I've been experiencing a similar issue, but *always* with pfSense & OPNsense VMs, and the only thing I could see as the reason might have been "high" bandwidth (but then I have Debian-based VMs doing much, much more network IO, although those aren't internet-facing like the OPNsense/pfSense ones)... and another one (okay, not "google-able", though internet-facing) that never gave the same problems.
I initially thought it was the cluster setup between the two OVH DCs over the vRack interfaces, but the OPNsense isn't clustered, yet it is now the one that gives the most "hangs". Typically rebooting the VM (the quick solution) fixes things, but I will need to take another look at whether I should replace the OPNsense with a FortiGate-VM, or whether that would give the same troubles.
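If it helps, bouncing the VM from the host can be scripted instead of done by hand; a rough sketch, where the VMID 101 is only a placeholder:

# Cleanly restart a hung guest from the PVE host (101 is a placeholder VMID)
qm shutdown 101 --timeout 60   # ACPI shutdown, wait up to 60 seconds
qm start 101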
root@proxsb01:~# pveversion -v
proxmox-ve: 5.4-2 (running kernel: 4.15.18-20-pve)
pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec)
pve-kernel-4.15: 5.4-8
pve-kernel-4.15.18-20-pve: 4.15.18-46
pve-kernel-4.15.18-14-pve: 4.15.18-39
pve-kernel-4.15.18-13-pve: 4.15.18-37
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-54
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-6
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
openvswitch-switch: 2.7.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-40
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
pve-zsync: 1.7-4
qemu-server: 5.0-54
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.13-pve1~bpo2
Are you using virtio network cards or Intel?

Hmmm... now that you mention it, those that give "problems" are e1000 based, while the cluster without trouble is virtio... let me change/fix those and see...
Did you disable all the offloading in pf/opnsense?
See also https://pfstore.com.au/blogs/guides/run-opnsense-in-proxmox
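For anyone finding this later, "disable all the offloading" inside a FreeBSD-based firewall guest boils down to something like the following; the interface name vtnet0 is an assumption, and the persistent way to do it is the hardware offloading checkboxes in the firewall's advanced networking settings rather than the command line:

# Inside the pfSense/OPNsense guest: switch off NIC offloading for a test
# (vtnet0 is a placeholder; this setting does not survive a reboot)
ifconfig vtnet0 -rxcsum -txcsum -tso4 -tso6 -lro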
I believe in the beginning there were some issues with the virtio net driver (or it wasn't supported) on pfSense, but that was maybe 3-4 years ago... there actually was a pfSense fork (nice interface) that took out the hardware-related stuff and built a FreeBSD kernel/etc. with the virtio bits... can't find it now, but in those days e1000 was "needed" on pfSense, so yes, coming from those days and having upgraded those older pfSense installs to today, it was simply stuck with.

I can't think of any reason not to use virtio, but be sure to disable all the different offloading options in the firewall.
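If anyone prefers to do the e1000-to-virtio switch from the CLI instead of the GUI, a minimal sketch follows; the VMID 101, the MAC address and the bridge vmbr0 are placeholders, and reusing the MAC that qm config reports keeps DHCP leases and firewall rules matching:

# On the PVE host: show the current NIC line, then change the model to virtio
qm config 101 | grep ^net0
qm set 101 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0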
VirtIO or E1000 adapters for the guests??

Hello!
I have the same problem. When the network adapter is under high load (we are using a 1 Gbit network), the server stops responding to network requests.
Proxmox 5
Any ideas?

VirtIO or E1000 adapters for the guests??

Proxmox 5: Linux guest - virtio
Proxmox 6: Windows guest - E1000
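In case it helps with the "stops responding under load" reports above, here is a hedged sketch of a host-side check and test; the interface name eno1 is an assumption, and treating offloading as the culprit is a guess to be verified, not a confirmed diagnosis:

# On the PVE host: look for NIC resets or unit hangs in the kernel log...
dmesg | grep -iE "hang|reset" | tail
# ...and, purely as a test, switch off segmentation/receive offloading (not persistent)
ethtool -K eno1 tso off gso off gro off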