Search results

  1. Nested Virtualization stopped working [PVE 6.1-5]

     Sorry, I have no clue then. Try booting an older kernel?
  2. Nested Virtualization stopped working [PVE 6.1-5]

     https://pve.proxmox.com/wiki/Nested_Virtualization About halfway down it says to either reload the module or reboot,
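     For reference, the reload sequence that wiki page describes looks roughly like this (a sketch assuming an Intel host; on AMD the module is kvm-amd/kvm_amd instead):

       # enable nested virtualization for the kvm-intel module
       echo "options kvm-intel nested=Y" > /etc/modprobe.d/kvm-intel.conf
       # reload the module (all VMs must be stopped first), or simply reboot
       modprobe -r kvm_intel
       modprobe kvm_intel
       # verify: should print Y
       cat /sys/module/kvm_intel/parameters/nested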
  3. Nested Virtualization stopped working [PVE 6.1-5]

     Wait, is that message from inside the level 1 virtual? i.e. the Proxmox inside the Proxmox?
  4. Nested Virtualization stopped working [PVE 6.1-5]

     Hmmm, silly question, but the conf file is still there and the modules are loaded?
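     A quick way to check both (assuming the Intel module names from the wiki page above):

       # is the options file still in place?
       cat /etc/modprobe.d/kvm-intel.conf
       # are the kvm modules actually loaded?
       lsmod | grep kvm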
  5. Nested Virtualization stopped working [PVE 6.1-5]

     Just updated and it now seems to work fine for me.
     proxmox-ve: 6.1-2 (running kernel: 5.3.13-2-pve)
     pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
     pve-kernel-5.3: 6.1-2
     pve-kernel-helper: 6.1-2
     pve-kernel-4.15: 5.4-12
     pve-kernel-5.3.13-2-pve: 5.3.13-2
     pve-kernel-5.3.13-1-pve: 5.3.13-1...
  6. Nested Virtualization stopped working [PVE 6.1-5]

     Never tested before (so can't confirm it used to work on my Intel rig), but I get the same issue as well.
  7. Kernel Oops with kworker getting tainted.

     You're right, looking at it, it's kernels 4.14 and 4.19. I can't confirm this NFS bit is the fix, as we haven't moved the containers back or applied the above-mentioned microcode updates in the BIOS.
  8. Kernel Oops with kworker getting tainted.

     Yes, the article points to an issue in NFS version 4.0 and maybe 4.1. My node that hasn't crashed has its NFS mounted using version 3.
  9. Kernel Oops with kworker getting tainted.

     So at the end of the rabbit hole I think I found this: https://about.gitlab.com/blog/2018/11/14/how-we-spent-two-weeks-hunting-an-nfs-bug/ And I checked the output of mount -v | grep and sure enough Node1 had a V4 NFS mount, and Node2 had it mounted as V3. I've changed both to use V3 but have not...
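     For anyone following along, the check plus one way to pin the version looks roughly like this (the storage name, server, and paths below are made-up examples):

       # show the negotiated NFS version on current mounts
       mount -v | grep nfs

       # /etc/pve/storage.cfg -- force NFSv3 for a storage entry
       nfs: backup
               server 192.168.0.50
               export /export/backup
               path /mnt/pve/backup
               options vers=3
               content backup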
  10. Kernel Oops with kworker getting tainted.

     So a little bit of hardware info: DL380 Gen9. The crashed system was System ROM P89 v2.60 (05/21/2018), System ROM Date 05/21/2018, E5-2620 v3 with microcode loaded 2019-03-01. Though we have been running fine for 2 days on an older-firmware node: System ROM P89 v2.52 (10/25/2017), System...
  11. Kernel Oops with kworker getting tainted.

     We mount our backup location via NFS, so we get quite a spike in traffic. I didn't boot an older kernel as the jump between the two kernel versions was huge. Got ours running on a node with older HP firmware and so far it seems OK.
  12. Kernel Oops with kworker getting tainted.

     Strange, we had a similar issue with our node; gonna reboot tonight. The console and web management are unresponsive, but SSH works, very slowly. Ours are HP Gen9 with quite old HP firmware. Containers seem to work though...
  13. RDP into VMs from external IP

     Never looked into HAProxy; I used Squid on pfSense, but HTTP and HTTPS are very different beasts from RDP.
  14. RDP into VMs from external IP

     A nice simple option is to just publish RDP on two different ports and have each port forward to a different machine, or change the port RDP is listening on on one machine (in case NAT loopback works on your router, though it sounds like it does from the above). ###WARNING### RDP is not really considered...
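     A minimal sketch of the two-port approach, assuming a Linux box is doing the NAT (the external ports and LAN addresses are made up):

       # forward two external ports to RDP (3389) on two different VMs
       iptables -t nat -A PREROUTING -p tcp --dport 33891 -j DNAT --to-destination 192.168.1.10:3389
       iptables -t nat -A PREROUTING -p tcp --dport 33892 -j DNAT --to-destination 192.168.1.11:3389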
  15. VM Failure to boot with more than 8 PCI Express Passthrough GPUs, Only UEFI Shell

     I think Linus Tech Tips had this issue and had to enable "above 4G decoding" in the BIOS of the host machine... I think.
  16. Proxmox 6 + Pci-passthrough = Reports high memory usage in pve-manager

     As a side note, I see your machine name is TVHeadEnd. I have huge stutter issues with my PCIe FreeSat card when passing it through to a virtual (it may have been congested PCIe lanes), but it seems rock solid in a container with access to /dev/dvb.
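     In case it helps, the container route usually comes down to a couple of lines in the container config; a sketch assuming PVE 6 (cgroup v1) and an example container ID of 101:

       # /etc/pve/lxc/101.conf
       # allow the DVB character devices (major 212) and bind-mount /dev/dvb
       lxc.cgroup.devices.allow: c 212:* rwm
       lxc.mount.entry: /dev/dvb dev/dvb none bind,optional,create=dir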
  17. Upgrade 3 nodes from 5.4 (with corosync 2) to 6.x

     Awesome sauce. I've seen that migrating from old to new hosts will work, but the other way 'should' work. Thanks for your help!
  18. Upgrade 3 nodes from 5.4 (with corosync 2) to 6.x

     Would node 1 (still on 5.4 but corosync 3) still be part of the HA cluster, so I can quickly migrate to node2 and node3 (and also test that the newer kernel works with my containers)?
  19. Upgrade 3 nodes from 5.4 (with corosync 2) to 6.x

     I'm following the PVE 5 to 6 guide, but I wanted to check: after I have stopped all the HA services, performed the corosync upgrade, and then upgraded all the nodes' other packages, will my containers all still be running fine on the node I left them on? Or will I need to reboot each of the...
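     For context, the HA/corosync part of that guide boils down to something like the following (a rough sketch; see the official PVE 5-to-6 upgrade wiki for the exact repository steps):

       # on every node, stop the HA services first
       systemctl stop pve-ha-lrm
       systemctl stop pve-ha-crm
       # with the corosync 3 repository enabled, upgrade corosync itself
       apt update && apt dist-upgrade
       # then restart the HA services
       systemctl start pve-ha-crm
       systemctl start pve-ha-lrm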
  20. windows install stuck in collecting information. nvidia gpu passthrough

     I take it you've done a clean boot of the host since the last try? The card might need to be reset gracefully. Further than that I'm out of options, having not done this on my box for years now.
