Search results

  1. J

    e1000e eno1: Detected Hardware Unit Hang:

    I have not used PVE 9 yet. What kernel version does it use? There is this bug in the bug tracker, specific to kernel version 6.8.12, but I have not been able to find a more recent one for PVE 9.
  2. J

    e1000e eno1: Detected Hardware Unit Hang:

    For future reference, as I presume users will keep on visiting this thread: dpkg -l | grep proxmox-kernel, then proxmox-boot-tool kernel pin 6.8.12-8-pve, and reboot. P.S. Not sure if I already posted this before... :D
  3. J

    e1000e eno1: Detected Hardware Unit Hang:

    Would you confirm that kernel 6.8.12-13 fixed it for you? Which exact NIC model are you using?
  4. J

    e1000e eno1: Detected Hardware Unit Hang:

    From my experience, if you use kernel 6.8.12-8-pve, then you do not need to disable offloading or add additional parameters, for that matter.
  5. J

    How to reset DNS cache

    The same happens to me, both on the nodes and inside the LXC. The two following commands failed: # resolvectl flush-caches Failed to flush caches: Unit dbus-org.freedesktop.resolve1.service not found. # systemd-resolve --flush-caches Failed to flush caches: Unit...
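Those errors mean systemd-resolved is not running on the host, so there is no resolver cache for it to flush. A minimal sketch that checks for the service first (assuming systemd is the init system; output strings are illustrative):

```shell
# Flush the systemd-resolved DNS cache only when the service is running.
# On PVE nodes and many containers, systemd-resolved is not enabled,
# which is exactly why resolvectl fails with "Unit ... not found".
if command -v systemctl >/dev/null 2>&1 \
   && systemctl is-active --quiet systemd-resolved; then
    resolvectl flush-caches
    echo "systemd-resolved cache flushed"
else
    echo "systemd-resolved is not running; nothing to flush"
fi
```

If the node simply forwards queries to an external resolver, there is no local cache to reset at all.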
  6. J

    e1000e eno1: Detected Hardware Unit Hang:

    Reverting back to version 6.8.12-8-pve should solve the issue.
  7. J

    e1000e eno1: Detected Hardware Unit Hang:

    I am closely following this bug in the Bugzilla of Proxmox, but it is specific to kernel version 6.8.12-9-pve and above, as it affects my PBS servers. However, I am also in charge of Proxmox VE 7.4-20 nodes still running kernels 5.15.158-2, which also seem affected by the NIC bug. Would anyone...
  8. J

    [SOLVED] Revert to prior Kernel

    List the kernels: # proxmox-boot-tool kernel list 6.5.13-6-pve 6.8.12-11-pve 6.8.12-8-pve 6.8.12-9-pve Pin the desired kernel: proxmox-boot-tool kernel pin 6.8.12-8-pve --next-boot In your case it would be, if I read correctly, 5.11.22-7-pve. Reboot. Uninstall the kernels you don't want...
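The steps from that post, laid out as a shell sequence. The commands are Proxmox-specific and the kernel version and package name below are examples taken from the thread, not prescriptions for your system:

```shell
# List the kernels registered with the boot tool
proxmox-boot-tool kernel list

# Pin a known-good kernel; --next-boot applies only to the next boot,
# omit it to make the pin persistent
proxmox-boot-tool kernel pin 6.8.12-8-pve --next-boot

# Reboot into the pinned kernel
reboot

# Afterwards, remove kernels you no longer want (package name is an example)
apt remove proxmox-kernel-6.8.12-9-pve-signed
```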
  9. J

    e1000e eno1: Detected Hardware Unit Hang:

    Thanks for reporting this. Good to know!
  10. J

    e1000e eno1: Detected Hardware Unit Hang:

    Thanks a lot for your report, @leiwang15 . Much appreciated. Incidentally, I found this thread on these forums regarding E1000E vs VirtIO on Proxmox.
  11. J

    e1000e eno1: Detected Hardware Unit Hang:

    Hi, Fabian. Thanks for the link to the bug tracker. I have contributed to it. I hope it can be solved soon.
  12. J

    How to update public ssh keys for proxmox nodes

    This is what I do to remove a node (e.g., proxmox5) from the cluster (Proxmox 7.4-19) after the pvecm delnode proxmox5 command has finished its execution at another node (e.g., proxmox1) and the deleted node has been shut down: rm --recursive --force /etc/pve/nodes/proxmox5 sed -i.bak...
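The rm step in that cleanup can be demonstrated safely on a scratch directory; the sed command's target is truncated in the snippet, so it is left out rather than guessed at:

```shell
# Demonstrate the cleanup pattern on a throwaway tree instead of /etc/pve.
# On a real cluster the path would be /etc/pve/nodes/<deleted-node>, run on
# a surviving node only after `pvecm delnode` has completed and the deleted
# node has been shut down.
demo=/tmp/pve-demo
mkdir -p "$demo/nodes/proxmox5"
rm --recursive --force "$demo/nodes/proxmox5"
test ! -e "$demo/nodes/proxmox5" && echo "stale node directory removed"
```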
  13. J

    Ping with unprivileged user in LXC container / Linux capabilities

    I have come up with these two solutions for my plays/provision.yml Ansible playbook, which provisions LXCs in the Proxmox cluster: Reinstall all packages containing the setcap command: apt-get --reinstall install iproute2 iputils-ping libcap2-bin. Fix permissions of the affected binary: setcap...
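Both fixes as a shell sketch. The exact setcap invocation is truncated in the snippet, so the one below is the usual Debian form for ping and should be treated as an assumption:

```shell
# Option 1: reinstalling the packages re-runs their postinst scripts,
# which re-apply the file capabilities via setcap
apt-get --reinstall install iproute2 iputils-ping libcap2-bin

# Option 2 (assumed form): grant ping the raw-socket capability directly
setcap cap_net_raw+ep /usr/bin/ping
getcap /usr/bin/ping   # verify the capability was applied
```

Either way, the underlying cause is that file capabilities on the binary were lost, so an unprivileged user can no longer open raw sockets.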
  14. J

    I reinstalled a node in the cluster and now the cluster is messy

    I followed the same procedure when re-adding a re-installed node and the first time it went fine (pvecm updatecerts did the job). But the second time I had to re-add the same node, after re-installing it again due to hardware issues, it did not. So I had to go through all the...
  15. J

    e1000e eno1: Detected Hardware Unit Hang:

    I presume that you executed the command in the PVE node, not inside the VM, correct? Moreover, what other guests did you have in the node? I have a mix of VMs and LXCs and this behaviour only happens where I have VMs. Would that be your case, too?
  16. J

    VM created after upgrade to Proxmox 8.1 can't reach other VMs

    In my case, it turned out that the firewall had gone crazy (for whatever reason). Restarting the firewall in the destination node (the one holding the VM that could not be contacted by the other VM) solved the problem. Kind of weird, to say the least.
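Restarting the firewall on a PVE node can be done with the pve-firewall CLI; the commands below assume a standard Proxmox VE install:

```shell
# Check and restart the Proxmox VE firewall daemon on the affected node
pve-firewall status
pve-firewall restart    # equivalently: systemctl restart pve-firewall
pve-firewall status     # confirm it reports running again
```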
  17. J

    [SOLVED] VM cannot access another VM in Proxmox 7.4

    For future reference, after hours of log-checking and dealing with firewall rules, network addresses, and all sorts of tests using nmap, ping, traceroute, and more, it turned out that the firewall on the node where the destination VM was had turned crazy. Fortunately, the solution was quick and...
  18. J

    [SOLVED] VM cannot access another VM in Proxmox 7.4

    Hey everyone! I have a Proxmox 7.4 cluster with several nodes. Across them, there are two VMs, live and test, both based on Ubuntu 18.04, both with a private IP address for communication among LXCs and VMs, and with a public IP address to access the Internet. Firewall is open for specific ports...
  19. J

    VM created after upgrade to Proxmox 8.1 can't reach other VMs

    Hi, @Veidit! I am facing a similar issue in which two VMs running Ubuntu 18.04 were migrated (via PBS) from a Proxmox 6 cluster to a Proxmox 7 cluster. The first can connect to the second, but the second cannot connect to the first. Both can be connected from an LXC I use as Ansible Controller...
  20. J

    a pve server rebooted - any insight?

    Hi, @chris. I have been having random reboots of the two nodes hosting my big PostgreSQL database and my big MongoDB database, respectively. Both run in LXCs with Debian 11 on Proxmox 7.4. I have always suspected it was the heavy (disk?) load of either, but I have never been able to prove it. I...