Recent content by markc

  1. MS-01 i9-13900H Xe iGPU Full Passthrough

    So a small improvement: adding "nomodeset i915.force_probe=4680" to the kernel command line of the guest CachyOS systemd-boot entry. The i915 kernel crash mentioned above has now disappeared!
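
    A minimal sketch of where those parameters land, assuming a typical systemd-boot entry (the file name, kernel paths and root UUID below are placeholders, not from the post):

        # /boot/loader/entries/cachyos.conf (hypothetical entry name)
        title   CachyOS
        linux   /vmlinuz-linux-cachyos
        initrd  /initramfs-linux-cachyos.img
        options root=UUID=... rw nomodeset i915.force_probe=4680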
  2. MS-01 i9-13900H Xe iGPU Full Passthrough

    My Minisforum MS-01 finally arrived after 4 months, so I've spent nearly 24 hours trying to see if I can get iGPU passthrough to work; no luck so far. I'll detail my settings and results so far and see if anyone can comment. Hopefully this thread will end up with a working example that can apply...
  3. [SOLVED] Inter node SDN networking using EVPN-VXLAN

    Thanks to spirit, I managed to get this EVPN-VXLAN system to work so here is a how-I-did-it. Assuming a router with a LAN IP of 192.168.1.1 and 3x Proxmox VE host nodes as pve1 (192.168.1.21), pve2 (192.168.1.22) and pve3 (192.168.1.23) including a number of VM/CTs in an internal 10.1.1.0/24...
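
    For orientation, a minimal sketch of the matching SDN controller and zone definitions in the section-config files Proxmox VE keeps under /etc/pve/sdn/ (the object names, ASN 65000 and VRF VXLAN tag 10000 are assumptions, not values from the post):

        # /etc/pve/sdn/controllers.cfg
        evpn: evpnctl
                asn 65000
                peers 192.168.1.21,192.168.1.22,192.168.1.23

        # /etc/pve/sdn/zones.cfg
        evpn: myzone
                controller evpnctl
                vrf-vxlan 10000
                exitnodes pve1
                mtu 1450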
  4. [SOLVED] Inter node SDN networking using EVPN-VXLAN

    Thanks yet again. I removed my initial tests and started from scratch again following your guide, with a few other google hints, and I have something that is working as I expect BUT only for ICMP. I can ping east-west and north-south (well, north at least) successfully but dig (UDP) and curl...
  5. [SOLVED] Inter node SDN networking using EVPN-VXLAN

    Much appreciate your help. I got this to work between 2x CTs on each of 3 nodes. Excellent. But it doesn't seem possible to access anything outside of the single defined vxlan network. It seems that the next step is needed for that. Okay so I now have two evpn vnets set up with two /24 networks...
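
    A sketch of what two such vnets and their subnets look like in the SDN config files (the vnet names, VXLAN tags, second /24 and gateways are illustrative, not taken from the post):

        # /etc/pve/sdn/vnets.cfg
        vnet: vnet1
                zone myzone
                tag 11000

        vnet: vnet2
                zone myzone
                tag 11001

        # /etc/pve/sdn/subnets.cfg
        subnet: myzone-10.1.1.0-24
                vnet vnet1
                gateway 10.1.1.1

        subnet: myzone-10.1.2.0-24
                vnet vnet2
                gateway 10.1.2.1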
  6. [SOLVED] Inter node SDN networking using EVPN-VXLAN

    I have a small 3 node cluster with vmbr0 allocated 192.168.1.0/24 host node IPs. From that LAN network I can reach all internal VM/CTs on each of the 3 host nodes. Fine. Now I have 2 OpenWrt CTs on two different nodes with a pair of Debian CTs also on each of those two nodes. I added a "blank"...
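
    For reference, the host-side bridge that layout implies, as a minimal /etc/network/interfaces sketch for pve1 (the physical NIC name eno1 and the gateway are placeholders):

        auto vmbr0
        iface vmbr0 inet static
                address 192.168.1.21/24
                gateway 192.168.1.1
                bridge-ports eno1
                bridge-stp off
                bridge-fd 0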
  7. iGPU passthrough attempt crashes host kernel

    So you have a similar CPU and iGPU and couldn't get it to work either. That's not very encouraging. There seems to be some extra IOMMU and SR-IOV support in the 6.8 kernel, so hopefully Proxmox will release an update or patched kernel soon and we can try yet again. I'm not sure of the practical...
  8. iGPU passthrough attempt crashes host kernel

    Just to be clear, when I add a PCI Device for the iGPU and start this Linux based guest VM, the HOST kernel crashes and takes down all VM/CTs. If this VM has autostart turned on, then it's just an endless cycle of death. When I remove that PCI Device and just rely on SPICE then all is well. Here...
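
    If anyone else hits the same autostart loop, one way to break it from the host shell once the node is back up is sketched below, assuming a hypothetical VMID of 100:

        qm set 100 --onboot 0          # stop the VM autostarting at boot
        qm set 100 --delete hostpci0   # drop the passed-through PCI device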
  9. iGPU passthrough attempt crashes host kernel

    PVE 8.1.3 host on a Minisforum UM780 XTX (AMD Ryzen 7 7840HS w/ Radeon 780M Graphics) with a fairly standard iGPU passthrough setup. The guest is Manjaro/KDE w/ BIOS OVMF, Display none, Machine q35, hostpci 0000:c5:00.0,pcie=1 (+ rombar and all functions). If I enable Display SPICE and disable...
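
    For context, that setup corresponds roughly to these lines in /etc/pve/qemu-server/<vmid>.conf; this is a sketch of the crashing config as described (not a working one), with "all functions" expressed by dropping the .0 function suffix:

        bios: ovmf
        machine: q35
        vga: none
        hostpci0: 0000:c5:00,pcie=1,rombar=1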
  10. Intel 13gen or 14gen iGPU full passthrough to Win?

    Interesting, I've never managed to get any kind of GPU passthrough to work to even know what SRIOV has to offer. I googled it and found this page which lists a few NUC models that could be potential iGPU passthrough hardware targets... I obviously have a lot to learn...
  11. Intel 13gen or 14gen iGPU full passthrough to Win?

    @Xclsd Thanks for the heads-up on that beast of a mobo working with Ubuntu. I actually want to use a Linux desktop myself, so that is encouraging. I have been mainly testing with a Win10 image because I thought that would be the most likely to work. I will go through the procedure again with an...
  12. Intel 13gen or 14gen iGPU full passthrough to Win?

    If anyone else follows this thread, could they please post the exact make and model of whatever (minipc?) device that they have successfully applied iGPU passthrough to? I've got a Minisforum HM90 (AMD) and have had no luck, trying every few months after a PVE update. I figure if "we" compare...
  13. LXC Desktop

    I'm the OP and I can't remember specific details, but FWIW this page has a bunch of good hints... https://blog.konpat.me/dev/2019/03/11/setting-up-lxc-for-intel-gpu-proxmox.html
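
    The linked post boils down to bind-mounting the host's /dev/dri into the container; a common pattern for that (my paraphrase, not quoted from the post; 226 is the DRM device major, and the cgroup2 key assumes a current PVE) in /etc/pve/lxc/<ctid>.conf is:

        lxc.cgroup2.devices.allow: c 226:* rwm
        lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir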
  14. Routing outgoing mail via multiple IPs

    I have a subnet of IPs that I allocate to a dozen different domains, and right now I use a combination of master.cf rules and sender_dependent_default_transport_maps = lmdb:/etc/postfix/sender_transport to send out mail from different customer virtual domains via different IPs. If I put PMG in...
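
    A minimal sketch of that Postfix arrangement (the domains, transport names and IPs are illustrative):

        # /etc/postfix/main.cf
        sender_dependent_default_transport_maps = lmdb:/etc/postfix/sender_transport

        # /etc/postfix/sender_transport (run "postmap lmdb:/etc/postfix/sender_transport" after editing)
        @customer1.example    out-ip1:
        @customer2.example    out-ip2:

        # /etc/postfix/master.cf -- one smtp transport per outgoing IP
        out-ip1  unix  -       -       n       -       -       smtp
          -o smtp_bind_address=192.0.2.11
        out-ip2  unix  -       -       n       -       -       smtp
          -o smtp_bind_address=192.0.2.12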
  15. [SOLVED] Kernel upgrade gets No space left on device

    Thank you Neobin, this was the procedure I used...

        # look up the UUID of the ESP that proxmox-boot-tool manages
        cat /etc/kernel/proxmox-boot-uuids
        67D6-E50C

        # resolve that UUID to the underlying partition
        ls -al /dev/disk/by-uuid/67D6-E50C
        lrwxrwxrwx 1 root root 15 Sep 1 20:06 /dev/disk/by-uuid/67D6-E50C -> ../../nvme0n1p2

        # mount the ESP, inspect its contents, then remove the old kernels
        mkdir /tmp/myesp
        mount /dev/nvme0n1p2 /tmp/myesp
        ll /tmp/myesp/*/*
        rm -rf...
