I'm seeing some kernel crashes with igc (the Intel NIC driver) on the current kernel. How often is a new kernel version released?
If this kernel is based on Ubuntu, where can I check which Ubuntu version it's on?
my latest crash
https://pastebin.com/X1X3gbwf
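For reference, this is how I'm checking what I'm currently running (uname is generic; pveversion is the Proxmox tool):

# show the running kernel version
uname -r
# show the full Proxmox package stack, including the pve-kernel package
pveversion -v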
Anyone find a solution other than editing every single container and restarting it with that option? It feels like there must be something at the OS level that could be changed to avoid this hassle.
Hi, I found the "CPU affinity" (CPU pinning) setting for VMs in the hardware panel in Proxmox, but I am unable to find this setting for LXC containers.
How can I pin a specific LXC container to specific CPU cores in PVE 7.3.3?
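The closest thing I could find is appending a raw LXC line to the container's config file; is something like this the intended way? (This assumes cgroup v2, the PVE 7.x default; 101 is an example container ID.)

# /etc/pve/lxc/101.conf - append a raw cgroup v2 cpuset entry
lxc.cgroup2.cpuset.cpus: 12-15

I believe the container then needs a full stop/start (not just a reboot from inside the container) for the setting to apply.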
Just to make sure I understand: there is a different RPC made by pvestatd that waits for a response and times out after 10 seconds, which causes the `pvestatd` message in syslog?
OK my script returned an instance timestamp for me to try to correlate.
Run 234 + Thu 24 Nov 2022 03:33:03 AM...
thank you for helping confirm :)
I'll try setting CPU affinity to "12,13,14,15" and watch htop on the Proxmox host to see whether only those cores are maxed out, and keep an eye on power consumption too
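From the CLI that would be something like the following, if I understand the option correctly (assuming the qemu-server option is named `affinity`; 100 is an example VMID):

# pin VM 100 to cores 12-15, then power-cycle it so the change applies
qm set 100 --affinity 12-15
qm stop 100 && qm start 100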
thanks for the pointer. I exclusively use NFS version 4.2; taking a hint from the script you shared, I wrote an endless-loop script.
#!/bin/bash
# Basic script to run in an endless loop on an NFS client;
# the idea is to catch a failure in rpcinfo and log a timestamp.
# NFS_SERVER is a placeholder - set it to your NFS server's hostname or IP.
NFS_SERVER="nfs-server"
counter=1
while :
do
    echo "Run $counter + $(date)"
    rpcinfo -T tcp "$NFS_SERVER" nfs 4 > /dev/null 2>&1 || echo "rpcinfo FAILED at $(date)"
    counter=$((counter + 1))
    sleep 1
done
I'm trying to debug a condition where, using NFS v4.2, there seems to be a brief connection timeout between Proxmox and my NFS server.
It looks like `pvestatd` monitors storage mounts and prints some useful messages - can these be made more verbose?
Nov 23 15:43:16 centrix...
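For now I'm planning to run it in the foreground with debug output, assuming pvestatd supports the --debug flag the other PVE daemons take:

# stop the managed service, then run pvestatd in the foreground with debug output
systemctl stop pvestatd
pvestatd start --debug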
Not to hijack the thread, but a related question: "CPU Affinity" under the Processors setting of the VM does not seem to allow me to explicitly select which CPU cores I want mapped to a VM... Can you add this feature please?
Rationale: Intel's latest Raptor Lake/Alder Lake processors have "performance"...
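In the meantime I'm working around it by pinning the QEMU process by hand with taskset (just a sketch; VMID 100 and cores 0-7 are made-up examples standing in for my P-cores):

# pin all threads of the running VM 100 onto cores 0-7
taskset -a -cp 0-7 $(cat /var/run/qemu-server/100.pid)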
I'm trying to permanently pass through a Samsung 980 NVMe M.2 disk to VMs. The problem is that my host node keeps binding this device to the 'nvme' driver instead of vfio for passthrough.
I have followed all of the documentation and recommended steps, yet I can't seem to unbind this specific...
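For reference, my current attempt follows the usual vfio route (the device ID below is just an example; I got mine from lspci -nn):

# bind the NVMe controller to vfio-pci by vendor:device ID (example ID shown)
echo "options vfio-pci ids=144d:a809" > /etc/modprobe.d/vfio.conf
# make the nvme driver wait until vfio-pci has had a chance to claim the device
echo "softdep nvme pre: vfio-pci" >> /etc/modprobe.d/vfio.conf
update-initramfs -u -k all
# after a reboot, verify with: lspci -nnk (look for "Kernel driver in use: vfio-pci")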
This is perfect timing. I have a new Intel N6000 processor that wasn't working on kernel 5.15 - I was using https://github.com/fabianishere/pve-edge-kernel to bridge the gap and get the hardware working properly.