Search results

  1. e1000 driver hang

    I'm still getting the same "Detected Hardware Unit Hang" errors sporadically when using PVE kernel 5.4. Mar 19 20:11:15 pve-host1.local kernel: [30377.339967] e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang: I recall there was previously some advice around setting: ethtool -K <ADAPTER>...
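
    The workaround usually suggested for these e1000e "Detected Hardware Unit Hang" errors is to disable the NIC's segmentation/checksum offloads with ethtool. A minimal sketch, assuming the adapter is eno1 as in the log above (the exact feature set that helps varies by report):

      # disable TCP/generic segmentation offload (the commonly cited e1000e workaround)
      ethtool -K eno1 tso off gso off
      # some reports also disable scatter-gather and TX checksumming
      ethtool -K eno1 sg off tx off

    These settings do not persist across reboots, so they would need to be reapplied, e.g. from a post-up hook in /etc/network/interfaces.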
  2. e1000 driver hang

    Great, thanks! I'll take a look at testing the PVE 5.4 kernel! I will report back, but it would be good to know here if others have tried the PVE 5.4 kernel and what the results were for this issue... Thanks!
  3. e1000 driver hang

    Has anyone tested this yet, and are there any results they can share? @spirit - Could you provide details of the patch, along with how we would install the patched kernel to test?
  4. Proxmox 5.2 Gemini Lake and IGD (graphics) passthrough for Ubuntu 18

    What’s your configuration? Are you passing through the entire host GPU to a single VM, or have you tried the gvt-g (mediated devices) method (if indeed this is available for Gemini Lake; it works on my Coffee Lake headless VMs)?
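
    For context, the gvt-g (mediated device) method mentioned here splits the host IGP into virtual GPUs instead of passing the whole device through. A rough sketch of the usual steps on a supported CPU (Coffee Lake is known to work; Gemini Lake support is the open question in this thread), with the PCI address, VMID and vGPU type below being assumptions:

      # enable GVT-g in the i915 driver by adding this to the kernel command line, then reboot:
      #   i915.enable_gvt=1
      # list the mediated device types the IGP offers
      ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/
      # on newer PVE releases the vGPU can then be assigned via hostpciX with mdev
      qm set 100 -hostpci0 00:02.0,mdev=i915-GVTg_V5_4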
  5. Regular ZED messages in syslog?

    Hi there, I'm running the latest PVE and using ZFS replication between local-zfs on 2 nodes. Every time the replication runs, it seems to output the following type of messages in syslog, with no clear indication of what they refer to or whether they are just unnecessary noise: Feb 28 07:29:01...
  6. e1000 driver hang

    I'm getting this same "eno1: Detected Hardware Unit Hang" in the syslog regularly. I haven't yet experimented with disabling the features of the NIC but I did notice that my NIC (on a fairly new Intel NUC8I5BEH) is listed as follows: # lspci -v | grep Ethernet 00:1f.6 Ethernet controller: Intel...
  7. Proxmox installation on Single NVMe/SSD - Homelab

    Yes. I have one NUC using a single NVMe and one NUC using a single SSD. Both are configured to use the ZFS file system and are clustered together (with a qdevice as a 3rd node for quorum)... I replicate the local ZFS based VMs between the nodes but also back them up daily to shared NFS storage...
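
    A sketch of the qdevice part of that layout, assuming the third machine already runs corosync-qnetd and is reachable at 192.168.1.10 (address and package choice are assumptions):

      # on both PVE nodes
      apt install corosync-qdevice
      # from one node: register the external quorum device with the cluster
      pvecm qdevice setup 192.168.1.10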
  8. [SOLVED] Lots of "CPUx: Package temperature above threshold, cpu clock throttled" syslog entries?

    I have Docker running inside a VM on the PVE cluster; the TIG (Telegraf, InfluxDB and Grafana) containers run inside Docker. I then installed an SNMP server on all PVE physical hosts and use Telegraf (in the Docker VM) to poll the PVE hosts for generic CPU/Mem/Disk/Network SNMP metrics, write the...
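
    A sketch of the host-side SNMP part of that setup, assuming the stock Debian snmpd package; the community string and subnet below are placeholders:

      apt install snmpd
      # allow read-only SNMP queries from the monitoring (Telegraf) subnet
      echo 'rocommunity public 192.168.1.0/24' >> /etc/snmp/snmpd.conf
      systemctl restart snmpd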
  9. Live Migration with GVT-g (mdev) passthrough device?

    Ok thanks. So sounds like it is theoretically possible, just that we have to wait until PVE updates to kernel 5.4+?
  10. Live Migration with GVT-g (mdev) passthrough device?

    Hi There, I currently have a 2-node Proxmox 6.1 cluster (3rd node is qdevice) where both nodes are the same hardware spec and support Intel GVT-g (mediated device) Passthrough. I know live migration between nodes is not possible with standard GPU device Passthrough but, if GVT-g enables the...
  11. [SOLVED] Intel coffee lake gGVT issue.

    I've raised this as a bug at the following URL: https://bugzilla.proxmox.com/show_bug.cgi?id=2510 Thanks!
  12. [SOLVED] Intel coffee lake gGVT issue.

    I'm seeing the same issue in testing with passing through the IGP using Intel GVT-d ... It looks like this issue was also previously seen when passing through the full IGP (not as a mediated device) and was fixed for that scenario, but it looks like the mediated device passthrough still exhibits this...
  13. [SOLVED] Proxmox 6.0 Gemini Lake and IGD (graphics) passthrough for Windows 10

    What happens if you remove the args: -device vfio-pci,host=00:02.0,addr=0x18,x-igd-opregion=on line from the VM conf and simply pass the GPU through using the web GUI (which results in the line hostpci0: 00:02.0 being added to the config)? This is the way I do it for a VM with IGP passthrough but I...
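
    For comparison, the web GUI method described here is equivalent to setting the device on the CLI; a sketch, with the VMID as a placeholder:

      # pass the integrated GPU (00:02.0) through to VM 100; adds "hostpci0: 00:02.0" to the config
      qm set 100 -hostpci0 00:02.0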
  14. Linux Kernel 5.3 for Proxmox VE

    Off-topic but the number at the end refers to the plug type that is supplied with the NUC (US, EU, UK etc) ...
  15. Linux Kernel 5.3 for Proxmox VE

    So does this mean we don’t have to enable the test repository any more to install the 5.3 kernel, and that it can be installed using “apt install” if the no-subscription repository is enabled? I don't see the 5.3 kernel listed in the "Available Updates" panel in the web GUI? Regarding the...
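
    Assuming the opt-in kernel is shipped as a meta-package (the package name below is an assumption), installing it from an enabled repository would look like:

      apt update
      apt install pve-kernel-5.3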
  16. Changing selected kernel for boot (systemd-boot) on headless PVE Host?

    The following is the output of pve-efiboot-tool kernel list root@pve6-test1:~# pve-efiboot-tool kernel list Manually selected kernels: None. Automatically selected kernels: 5.0.15-1-pve 5.0.21-3-pve 5.3.7-1-pve So, I then ran the following commands as you stated above: root@pve6-test1:~#...
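
    For reference, pve-efiboot-tool can also pin kernels so they are always kept on the ESP; a sketch using a version from the listing above (note this pins the kernel but does not by itself change which entry systemd-boot picks by default):

      pve-efiboot-tool kernel add 5.0.21-3-pve
      pve-efiboot-tool refresh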
  17. Changing selected kernel for boot (systemd-boot) on headless PVE Host?

    Hi @fabian, I tried the workaround above and the following is the output. On rebooting, the 5.3 kernel is still shown in the EFI boot menu (as default) and is loaded, despite the output from the commands above. Thanks! root@pve6-test1:~# uname -a Linux pve6-test1 5.3.7-1-pve #1 SMP PVE 5.3.7-1...
  18. Changing selected kernel for boot (systemd-boot) on headless PVE Host?

    Hi @fabian, I see you’ve logged a bug report for this, so I’ll also monitor the updates there: https://bugzilla.proxmox.com/show_bug.cgi?id=2448 Is there any way currently to work around this issue prior to it occurring (i.e. before removing the currently booted kernel), or to fix the module...
  19. Changing selected kernel for boot (systemd-boot) on headless PVE Host?

    Results for the two commands below - Note, lsmod | grep nls does not return anything ...
  20. Changing selected kernel for boot (systemd-boot) on headless PVE Host?

    The lsmod nls_iso88591-1 command just errors with "Usage: lsmod" (I think there is a mistake in your commands above?). lsmod (run just as "lsmod") output as follows: uname -a output as follows: Thanks!
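
    For reference, the module check that was probably intended would be along these lines (dashes and underscores in module names are interchangeable):

      # show any loaded NLS modules
      lsmod | grep nls
      # check whether the codepage module exists for the running kernel
      modinfo nls_iso8859-1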
