So a small improvement: adding "nomodeset i915.force_probe=4680" to the guest CachyOS systemd-boot kernel command line. The i915 kernel crash described above has now disappeared!
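For reference, the parameters go on the "options" line of the boot entry under /boot/loader/entries/ inside the guest. A minimal sketch (the demo operates on a scratch copy; the real entry filename varies by install, so that part is an assumption):

```shell
# Demo on a temp file; on the real guest, point ENTRY at the file under
# /boot/loader/entries/ (filename varies by install).
ENTRY=$(mktemp)
printf 'options root=UUID=xxxx rw\n' > "$ENTRY"
# Append the iGPU workaround parameters to the kernel command line:
sed -i '/^options /s/$/ nomodeset i915.force_probe=4680/' "$ENTRY"
grep '^options' "$ENTRY"
```

Remember to keep a backup of the original entry file before editing it in place.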
My Minisforum MS-01 finally arrived after 4 months, so I've spent nearly 24 hours trying to see if I can get iGPU passthrough to work; no luck so far. I'll detail my settings and results so far and see if anyone can comment. Hopefully this thread will end up with a working example that can apply...
Thanks to spirit, I managed to get this EVPN-VXLAN system to work so here is a how-I-did-it.
Assuming a router with a LAN IP of 192.168.1.1 and 3x Proxmox VE host nodes as pve1 (192.168.1.21), pve2 (192.168.1.22) and pve3 (192.168.1.23) including a number of VM/CTs in an internal 10.1.1.0/24...
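For that topology, the config is normally created via the GUI (Datacenter → SDN), but the generated files under /etc/pve/sdn/ look roughly like the sketch below. All names, the ASN, and the VXLAN tags here are assumptions for illustration; only the IPs come from the setup described above:

```
# /etc/pve/sdn/controllers.cfg
evpn: evpnctl
        asn 65000
        peers 192.168.1.21,192.168.1.22,192.168.1.23

# /etc/pve/sdn/zones.cfg
evpn: evzone
        controller evpnctl
        vrf-vxlan 10000
        exitnodes pve1

# /etc/pve/sdn/vnets.cfg
vnet: vnet1
        zone evzone
        tag 11000

# /etc/pve/sdn/subnets.cfg
subnet: evzone-10.1.1.0-24
        vnet vnet1
        gateway 10.1.1.1
        snat 1
```

After changes, apply with "Apply" in the SDN panel (or `pvesh set /cluster/sdn`) so the config is pushed to all nodes.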
Thanks yet again. I removed my initial tests and started from scratch following your guide, with a few other hints from Google, and I have something that works as I expect, BUT only for ICMP. I can ping east-west and north-south (well, north at least) successfully, but dig (UDP) and curl...
Much appreciate your help. I got this to work between 2x CTs on each of 3 nodes. Excellent. But it doesn't seem possible to access anything outside of the single defined vxlan network. It seems that the next step is needed for that.
Okay so I now have two evpn vnets set up with two /24 networks...
I have a small 3 node cluster with vmbr0 allocated 192.168.1.0/24 host node IPs. From that LAN network I can reach all internal VM/CTs on each of the 3 host nodes. Fine. Now I have 2 OpenWrt CTs on two different nodes with a pair of Debian CTs also on each of those two nodes. I added a "blank"...
So you have a similar CPU and iGPU and couldn't get it to work either. That's not very encouraging. There seem to be some IOMMU and SR-IOV improvements in the 6.8 kernel, so hopefully Proxmox will release an update or patched kernel soon and we can try yet again. I'm not sure of the practical...
Just to be clear, when I add a PCI Device for the iGPU and start this Linux based guest VM, the HOST kernel crashes and takes down all VM/CTs. If this VM has autostart turned on, then it's just an endless cycle of death. When I remove that PCI Device and just rely on SPICE then all is well. Here...
PVE 8.1.3 host on a Minisforum UM780 XTX (AMD Ryzen 7 7840HS w/ Radeon 780M Graphics) with a fairly standard iGPU passthrough setup. The guest is Manjaro/KDE w/ BIOS OVMF, Display none, Machine q35, hostpci 0000:c5:00.0,pcie=1 (+ rombar and all functions). If I enable Display SPICE and disable...
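For comparison, the relevant guest config as described would look something like this (VMID and exact option spelling are assumptions; this is what the GUI settings above typically produce in /etc/pve/qemu-server/):

```
# /etc/pve/qemu-server/<vmid>.conf -- relevant lines only
bios: ovmf
machine: q35
vga: none
# pcie=1 from "PCI-Express", rombar=1 from "ROM-Bar"; "All Functions"
# drops the .0 function suffix:
hostpci0: 0000:c5:00,pcie=1,rombar=1
```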
Interesting. I've never managed to get any kind of GPU passthrough working, so I don't even know what SR-IOV has to offer. I googled it and found this page, which lists a few NUC models that could be potential iGPU passthrough hardware targets... I obviously have a lot to learn...
@Xclsd Thanks for the heads-up on that beast of a mobo working with Ubuntu. I actually want to use a linux desktop myself, so that is encouraging. I have been mainly testing with a Win10 image because I thought that would be the most likely to work. I will go through the procedure again with an...
If anyone else follows this thread, could they please post the exact make and model of whatever (minipc?) device that they have successfully applied iGPU passthrough to?
I've got a Minisforum HM90 (AMD) and have had no luck, trying every few months after a PVE update. I figure if "we" compare...
I'm the OP and I can't remember specific details, but FWIW this page has a bunch of good hints...
https://blog.konpat.me/dev/2019/03/11/setting-up-lxc-for-intel-gpu-proxmox.html
I have a subnet of IPs that I allocate to a dozen different domains, and right now I use a combination of master.cf rules and sender_dependent_default_transport_maps = lmdb:/etc/postfix/sender_transport to send out mail from different customer virtual domains via different IPs. If I put PMG in...
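To illustrate the current (pre-PMG) setup: a sender-dependent transport map plus one smtp clone per outbound IP in master.cf. Domain names and IPs below are placeholders, not the real config:

```
# main.cf
sender_dependent_default_transport_maps = lmdb:/etc/postfix/sender_transport

# /etc/postfix/sender_transport  (domains and IPs are placeholders)
@customer-a.example   out-ip1:
@customer-b.example   out-ip2:

# master.cf -- one smtp transport per source address
out-ip1  unix  -  -  n  -  -  smtp -o smtp_bind_address=203.0.113.11
out-ip2  unix  -  -  n  -  -  smtp -o smtp_bind_address=203.0.113.12
```

Run `postmap lmdb:/etc/postfix/sender_transport` and reload Postfix after editing the map.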
Thank you Neobin, this was the procedure I used...
cat /etc/kernel/proxmox-boot-uuids
67D6-E50C
ls -al /dev/disk/by-uuid/67D6-E50C
lrwxrwxrwx 1 root root 15 Sep 1 20:06 /dev/disk/by-uuid/67D6-E50C -> ../../nvme0n1p2
mkdir /tmp/myesp
mount /dev/nvme0n1p2 /tmp/myesp
ll /tmp/myesp/*/*
rm -rf...
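After pruning the stale files, the usual follow-up is to unmount and let proxmox-boot-tool resync the ESP. A sketch of that last step (`clean` and `refresh` are its documented subcommands; run on the host with care):

```
umount /tmp/myesp
proxmox-boot-tool clean      # drop UUIDs of ESPs that no longer exist
proxmox-boot-tool refresh    # copy current kernels and rewrite loader entries
```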