If we boot a VM/guest on kernel 4.14.12 with KPTI enabled, it will no longer show netfilter stats the way earlier kernels (4.13.4 and below) did, e.g. the counters always come back as zero.
Can't really find a good reason on the 'Net.
Does anyone know why?
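For reference, this is the kind of query meant (the exact command from the original post isn't shown above, so this is just an assumption that the usual iptables/conntrack counters are what's being read):

    iptables -L -v -n                  # per-rule packet/byte counters
    cat /proc/net/stat/nf_conntrack    # per-CPU conntrack statistics (if nf_conntrack is loaded)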
Sorry, I haven't got the luxury of a similar testlab; only got an older PVE 3.4 lab :/
Guests are running ELRepo kernel-ml 4.x, CentOS 6 itself is on kernel 2.6; it also happened at least on the previous ELRepo kernel-ml 4.13.4-1.
It happened both with the host on proxmox-ve: 4.4-101 (no KPTI) and now on proxmox-ve: 4.4-102 (KPTI).
KPTI is not enabled in the guest. (Testing is a bit hard as we're talking about a full 24x7 production site :)
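For what it's worth, a quick way to double-check that KPTI really is off in the guest (a sketch assuming the stock 4.14.x boot messages):

    dmesg | grep -i 'page table'    # prints "Kernel/User page tables isolation: enabled" when KPTI is active
    grep -o nopti /proc/cmdline     # non-empty output means KPTI was disabled on the kernel command line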
Managed to capture it on the serial console; see attached text file.
Guest is a CentOS 6.9, just patched as of today.
CentOS release 6.9 (Final)
Kernel 4.14.12-1.el6.elrepo.x86_64 on an x86_64
hapA login:
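In case it helps others reproduce this, one way to get such a serial console capture on a PVE guest (a sketch, not necessarily how the capture above was made; <vmid> is a placeholder):

    qm set <vmid> -serial0 socket    # add a serial port to the VM config
    # boot the guest with console=ttyS0,115200 on its kernel command line
    qm terminal <vmid>               # attach to the serial console from the host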
root@n1:~# cat /etc/pve/qemu-server/400.conf
#HA proxy load balancer node A
bootdisk: virtio0...
Weirdly enough, other VMs with even higher network traffic, but running nginx load balancers instead of HAProxy, don't seem to crash during live migration. The HAProxy VMs didn't crash in the past either; maybe it's due to a newer HAProxy version (1.7.9) than in the past... The VMs are otherwise similar, same...
The last two live migrations of a VM handling relatively heavy network traffic seemed to crash the VM on the target host at resume, in the virtio-net driver. See attached SD from the target VM console.
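For reference, an online migration of this VM would be triggered along the lines of (a sketch; the target node name is a placeholder, VM id 400 taken from the config shown above, assuming that's the VM in question):

    qm migrate 400 <targetnode> --online    # live-migrate the running VM to another cluster node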
Booting back into the previous kernel 2.6.32-46 and starting networking manually, VLANs work again.
(This should probably go into the networking forum instead...)
Wondering what changed in the kernel that causes VLANs not to function. Hints, anyone?
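A quick sanity check on the new kernel, assuming plain 802.1q tagging (the interface name and VLAN id are placeholders):

    modprobe 8021q                                         # make sure the VLAN module loads at all
    ip link add link eth0 name eth0.100 type vlan id 100   # create a tagged sub-interface by hand
    ip link set eth0.100 up
    ip -d link show eth0.100                               # -d shows the vlan id if tagging is working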
Got an older 7-node PVE 3.4 testlab (running Ceph Hammer 0.94.9 on 4 of the nodes and only VMs on the other 3), which we wanted to patch up today, but after rebooting our OSD won't start; it seems ceph can't connect to the ceph cluster. Wondering why that might be?
Previous version before patching...
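Basic things I'd check first (a sketch assuming the Hammer-era sysvinit scripts shipped with PVE 3.4; the OSD id is a placeholder):

    ceph -s                          # do the monitors answer and is there quorum?
    ceph osd tree                    # which OSDs are down/out after the reboot
    /etc/init.d/ceph start osd.0     # try one OSD and watch /var/log/ceph/ceph-osd.0.log for why it fails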
Currently swap is off with swappiness=0 (the sysctl sketched below), so I assume that ought to avoid swapping out pages at all.
At what level are people allocating host memory for VM usage, while keeping enough headroom to migrate VMs off a downed/upgrading host?
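For context, the swappiness setting mentioned above is just the standard sysctl (a sketch):

    sysctl vm.swappiness=0                          # tell the kernel to strongly prefer reclaim over swapping
    echo 'vm.swappiness = 0' >> /etc/sysctl.conf    # persist the setting across reboots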