I believe I only got this error with kernel versions after (not including) 6.8. The driver version doesn't seem to matter: the officially supported 16.11, older "patched" 16.x drivers, or 17.x (patched or unpatched).
I have yet to try patching...
```
# cat blacklist.conf
blacklist nouveau
options nouveau modeset=0
```
I have it blacklisted on all my hosts. I don't remember why I have `options nouveau modeset=0` there.
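In case it helps anyone setting this up, the usual way to put that in place on a Debian/Proxmox host is something like the following (the file name under /etc/modprobe.d/ is arbitrary, and the initramfs rebuild is what makes the blacklist apply at early boot):

```
# Write the blacklist (any .conf name under /etc/modprobe.d/ works)
cat <<'EOF' > /etc/modprobe.d/blacklist.conf
blacklist nouveau
options nouveau modeset=0
EOF

# Rebuild the initramfs so nouveau stays out at early boot
update-initramfs -u -k all
```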
Do you not have the virtio drivers installed on the guest?
Also, what CPU type is assigned to the VM? There has been discussion about not using `host` with a Windows guest.
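If you want to experiment with that, switching the CPU type is quick from the host CLI; something like this (VMID 100 and the x86-64-v2-AES model are just examples, pick whatever fits your setup):

```
# Switch VM 100 away from the `host` CPU type
# (x86-64-v2-AES is one commonly suggested alternative on recent PVE)
qm set 100 --cpu x86-64-v2-AES
```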
I have 2 VMs that I pass a vGPU instance to. Both run Debian 13 with Docker installed (and the NVIDIA Container Toolkit). The instance is then passed through to the Docker containers.
One is a codeproject.ai image (codeproject/ai-server:cuda12_2-2.9.7)
The...
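For what it's worth, the container side of that setup is nothing special; once the NVIDIA Container Toolkit is configured in the guest, it's roughly this (the container name and published port here are just placeholders):

```
# Run the CodeProject.AI image with the vGPU exposed via the toolkit
# (--gpus all); the name and port mapping are placeholders
docker run -d --name codeproject-ai \
  --gpus all \
  -p 32168:32168 \
  codeproject/ai-server:cuda12_2-2.9.7
```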
I haven't seen any kernel panics, but I am only passing them to Linux VMs and only using the CUDA and video encoding stuff.
I am not using it as an actual display adapter in the VM. Maybe that makes a difference.
This might not matter or help at all...
When you are booted into the previous kernel, you say your pool is working? If so, then do a `zpool status pool` (or whatever the name of the pool is).
If it has sda and sdc listed, then I would export...
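To sketch out where I'd go with that (assuming the pool really is imported by sdX names, and using `pool` as a stand-in for the actual pool name):

```
# See how the pool's devices are currently named
zpool status pool

# If they show up as sda/sdc, re-import using stable by-id paths
zpool export pool
zpool import -d /dev/disk/by-id pool
```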
I use the GPU for a couple of VMs, so I have no choice but to use the vGPU drivers.
But yes, the 16.11 drivers will compile just fine with a 6.14 kernel. I only get the above error after the VM starts up. I assume whenever the driver inside...
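If anyone wants to compare, the way I check that the module actually built and loaded against 6.14, and then watch what happens when the VM starts, is just the standard tools (this assumes the driver was installed with DKMS support):

```
# Confirm the vGPU host driver built against the running kernel (DKMS installs only)
dkms status | grep -i nvidia

# Follow kernel messages for NVIDIA/vGPU entries while the VM starts
dmesg -w | grep -iE 'nvidia|vgpu'
```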
I went ahead and updated to PVE 9 even though I knew it came with kernel 6.14 and that I already had this problem, which I posted about in a different thread. Back then I was using patched drivers (I'm using a Tesla P4) because the 16.9 version of the drivers didn't support...