I changed the filter to:
global_filter=["r|WDC|", "r|TOSHIBA|", "r|/dev/zd.*|", "r|/dev/mapper/.*|"]
then it boots without issue, so this is a viable workaround. But I guess that's still a bug in LVM?
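In case it helps anyone else, this is roughly how I verified the change took effect (a sketch assuming a Debian-based host like Proxmox VE; adjust to your setup):

# show the filter LVM actually sees after editing /etc/lvm/lvm.conf
lvmconfig devices/global_filter

# lvm.conf gets copied into the initramfs, so rebuild it, otherwise
# the old filter is still used during early boot
update-initramfs -u

# list every device LVM scans (including non-PVs) to confirm
# only the intended disks show up
pvs -a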
I have a custom global_filter set in lvm.conf like this:
global_filter=["a|M4-CT064M4SSD2|", "r|.*|"]
Basically I have quite a lot of physical disks, and some of them are spun down most of the time. I don't want LVM to accidentally wake them up, so I'm whitelisting only the disk that actually has LVM on it...
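For reference, here's roughly what that looks like in /etc/lvm/lvm.conf, with comments on how LVM evaluates it (the device pattern is just my SSD's model string; substitute your own):

devices {
    # Patterns are tried in order and the first match wins:
    # "a|...|" accepts a device, "r|...|" rejects it.
    # Accept only the disk that holds my PV, reject everything
    # else so spun-down disks are never opened.
    global_filter = [ "a|M4-CT064M4SSD2|", "r|.*|" ]
}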
Sorry to revive an old thread, but I'd like to report that vendor-reset did solve the Code 43 issue with a Vega 56 here.
One thing I'd like to add is that I didn't succeed on the first try; then I noticed this quote from vendor-reset:
I had the VM set to start on boot, so by that point the GPU was already in a non-recoverable state...
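If anyone hits the same thing, what worked for me was disabling autostart so vendor-reset is loaded before the VM ever touches the GPU. A sketch, with VM ID 100 as a placeholder:

# stop the VM from auto-starting, so the GPU stays untouched at boot
qm set 100 --onboot 0

# load vendor-reset first, then start the VM manually
modprobe vendor-reset
qm start 100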
I have two VMs, each with a 1070 passed through. Upgrading from 5.4 to 6.0 broke it. I've tried both workarounds, kernel_irqchip=on and pc-q35-3.1, but neither works; it crashes as soon as the GPU driver is loaded.
Update: I upgraded to 6.0-7 and pc-q35-3.1 works now.
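For anyone finding this later, pinning the machine type can be done per VM; a sketch assuming VM ID 100:

# pin the machine type so a QEMU upgrade doesn't change it under you
qm set 100 --machine pc-q35-3.1

The same thing can be set by hand as "machine: pc-q35-3.1" in /etc/pve/qemu-server/100.conf.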