Search results

  1. stefws

    guest on kernel 4.14.12 fails to show NF conntrack

    Reported @kernel.org as Bug 198479 and @EPEL as 0000816.
  2. stefws

    guest on kernel 4.14.12 fails to show NF conntrack

    Right, though I had a bit of trouble finding who to contact for EPEL kernel-ml; it's under Fedora somehow...
  3. stefws

    guest on kernel 4.14.12 fails to show NF conntrack

    That was also my first thought, but as Fabian says... I assume conntrack-tools also uses /proc.
  4. stefws

    NF tuning not applied at boot time

    and how to possibly avoid this...
  5. stefws

    guest on kernel 4.14.12 fails to show NF conntrack

    If we boot a VM/guest on kernel 4.14.12 with KPTI enabled, it will no longer show netfilter stats as on earlier kernels (4.13.4 and below), e.g. always returning a zero value for: Can't really find a good reason on the 'Net'. Does anyone know why?
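
    A minimal sketch of the kind of check involved (the exact command is truncated in the snippet above; the paths and tools here are assumptions):

      # conntrack entries as exposed through /proc (reportedly always zero on the affected guests)
      cat /proc/sys/net/netfilter/nf_conntrack_count
      cat /proc/sys/net/netfilter/nf_conntrack_max
      # the same count via conntrack-tools
      conntrack -C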
  6. stefws

    NF tuning not applied at boot time

    Have this config file on the hypervisor/host nodes: But after boot we still find the default values and wonder why:
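
    A minimal sketch of the setup described, assuming the file is the pve_local.conf mentioned later in these posts and using illustrative values:

      # /etc/sysctl.d/pve_local.conf (values assumed)
      net.netfilter.nf_conntrack_max = 1048576
      net.netfilter.nf_conntrack_tcp_timeout_established = 86400

      # verify after boot, and re-apply by hand if needed
      sysctl net.netfilter.nf_conntrack_max
      sysctl -p /etc/sysctl.d/pve_local.conf

    One common reason such keys are not applied at boot is that net.netfilter.* sysctls only exist once the nf_conntrack module is loaded, which may happen after sysctl.d has already been processed.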
  7. stefws

    PVE 4.4 - VM crashes after live migration if virt-net is highly loaded

    Sorry, the guest kernel 4.14.12 has KPTI enabled, but the crash also happened with 4.13.4 without KPTI...
  8. stefws

    PVE 4.4 - VM crashes after live migration if virt-net is highly loaded

    Sorry, haven't got the luxury of a similar testlab, only an older PVE 3.4 lab :/ Guests are running EPEL kernel-ml 4.x; CentOS 6 itself is on kernel 2.6. It also happened at least on the previous EPEL kernel-ml 4.13.4-1.
  9. stefws

    PVE 4.4 - VM crashes after live migration if virt-net is highly loaded

    It happened both with the host on proxmox-ve: 4.4-101 (no KPTI) and now on proxmox-ve: 4.4-102 (KPTI). KPTI is not enabled in the guest. (Testing is a bit hard as we're talking about a full 24x7 production site :)
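
    For reference, a couple of ways one might verify the KPTI state on host or guest (a sketch; exact messages vary by kernel build):

      # KPTI announces itself in the boot log
      dmesg | grep -i 'page table'
      # newer kernel builds also expose it here (not present on all 4.x kernels)
      cat /sys/devices/system/cpu/vulnerabilities/meltdown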
  10. stefws

    PVE 4.4 - VM crashes after live migration if virt-net is highly loaded

    Managed to capture it on the serial console, see attached text file. The guest is a CentOS 6.9, just patched as of today. CentOS release 6.9 (Final) Kernel 4.14.12-1.el6.elrepo.x86_64 on an x86_64 hapA login: root@n1:~# cat /etc/pve/qemu-server/400.conf #HA proxy load balancer node A bootdisk: virtio0...
  11. stefws

    PVE 4.4 - VM crashes after live migration if virt-net is highly loaded

    Weirdly enough, other VMs with even higher network traffic but running nginx load balancers instead of HAProxy don't seem to crash during live migration. The HAProxy VMs didn't crash in the past either; maybe it's due to a newer HAProxy version (1.7.9) that in the past... VMs are otherwise similar, same...
  12. stefws

    PVE 4.4 - VM crashes after live migration if virt-net is highly loaded

    Hm, too late; of course I have already power cycled it, and I had no serial console attached (I've also forgotten how to connect one from back in the day).
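
    For reference, a sketch of how a serial console could be wired up on PVE (VM ID 400 is taken from the config quoted above; the exact flags are assumptions):

      # on the host: give the VM a virtual serial port, then attach to it
      qm set 400 -serial0 socket
      qm terminal 400
      # in the CentOS 6 guest: send kernel output to that port by adding
      #   console=tty0 console=ttyS0,115200n8
      # to the kernel command line and rebooting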
  13. stefws

    PVE 4.4 - VM crashes after live migration if virt-net is highly loaded

    The last two live migrations of a VM carrying relatively heavy network traffic seemed to crash the VM on the target host at resume, in the virtio-net driver. See attached SD from the target VM console.
  14. stefws

    patched from 3.4.15 to 3.4.16, now ceph 0.94.9 fails to start

    Booting back into the previous kernel 2.6.32-46 and starting networking manually, VLANs work again. (This should probably go into the networking forum instead...) Wondering what changed in the kernel that causes VLANs not to function. Hints, anyone?
  15. stefws

    patched from 3.4.15 to 3.4.16, now ceph 0.94.9 fails to start

    In /etc/network/interfaces we have always had this VLAN on top of an Open vSwitch bond: Wondering why this changed or why it causes our VLANs to fail now:
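
    A minimal sketch of that kind of layout in /etc/network/interfaces (interface names, VLAN tag and addresses are assumptions, not the actual config):

      auto bond0
      iface bond0 inet manual
          ovs_type OVSBond
          ovs_bridge vmbr0
          ovs_bonds eth0 eth1
          ovs_options bond_mode=balance-slb

      auto vmbr0
      iface vmbr0 inet manual
          ovs_type OVSBridge
          ovs_ports bond0 vlan50

      auto vlan50
      iface vlan50 inet static
          ovs_type OVSIntPort
          ovs_bridge vmbr0
          ovs_options tag=50
          address 10.0.50.10
          netmask 255.255.255.0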
  16. stefws

    patched from 3.4.15 to 3.4.16, now ceph 0.94.9 fails to start

    It seems Debian now loads a kernel module named vxlan, used by Open vSwitch, and the patched node's various VLANs aren't working; digging into this...
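
    A few diagnostics one might run while digging (standard Open vSwitch/kernel tooling, not commands quoted from the post; bond0 is an assumed name):

      lsmod | grep -E 'vxlan|openvswitch'   # which related modules are loaded
      ovs-vsctl show                        # bridge/bond/port layout as OVS sees it
      ovs-appctl bond/show bond0            # state of the bond members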
  17. stefws

    patched from 3.4.15 to 3.4.16, now ceph 0.94.9 fails to start

    We have an older 7-node 3.4 testlab (running Ceph Hammer 0.94.9 on 4 of the nodes and only VMs on 3 nodes) which we wanted to patch up today, but after rebooting our OSDs won't start; it seems ceph can't connect to the ceph cluster. Wondering why that might be? Previous version before patching...
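
    A minimal sketch of first checks in that situation (the monitor/cluster-network address is an assumption):

      ceph -s             # can the node reach the monitors at all?
      ceph osd tree       # which OSDs are down/out
      ping -c3 10.0.50.1  # is the Ceph public/cluster network (on the VLAN) reachable?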
  18. stefws

    4.4 and memory usage

    Have added vm.swappiness = 0 to /etc/sysctl.d/pve_local.conf because I've read this note from IBM
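
    For reference, the sort of change described, with the value from the post (applying and verifying it live is an assumption about workflow):

      echo 'vm.swappiness = 0' >> /etc/sysctl.d/pve_local.conf
      sysctl -p /etc/sysctl.d/pve_local.conf
      cat /proc/sys/vm/swappiness   # confirm the running value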
  19. stefws

    4.4 and memory usage

    Currently swap is off with swappiness=0, so I assume it ought to avoid swapping out pages at all. At what levels are people allocating host memory for VM usage, while remembering to be able to migrate VMs away from a downed/upgrading host?
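
    One rough way to reason about that headroom question (numbers purely illustrative):

      # with N equal hosts, keeping each host's VM allocation at or below
      # (N-1)/N of its RAM leaves room to absorb one failed host's VMs
      N=4; RAM_GB=256
      echo "per-host ceiling: $(( RAM_GB * (N - 1) / N )) GB"   # -> 192 GB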
