Hi,
I'm on a 7840HS and I wanted to pass through my iGPU to a Linux guest. I had it working on a Windows guest with the same config (though GPU accel didn't seem to work; I didn't test it thoroughly), but for some reason it does not work on a Linux guest. The errors are in the GitHub discussion link...
I just connect my minipc to the router itself lol
don't have a gigantic setup yet so it's fine
I'm currently working on iGPU passthrough (of the 780M) to a Linux guest to try ROCm.
Current BIOS is v1.09 (the latest @ https://www.minisforum.com/new/support?lang=en#/support/page/download/79), but there have been known BIOS issues in the past for this chipset (not just this particular mini PC model).
I looked up the microcode files to make sure there are no updates:
# grep...
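For reference, a quick way to check what microcode revision the host is actually running (a generic sketch, nothing 7840HS-specific):

# revision currently loaded, as reported by the kernel
grep -m1 microcode /proc/cpuinfo
# whether an early update was applied at boot
dmesg | grep -i microcode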
So in /etc/pve/virtual-guest/cpu-models.conf did you try:
cpu-model: 7840HS
    flags -arch_capabilities;-hypervisor;-tsc_adjust;-tsc_deadline_timer;-tsc_known_freq
    phys-bits host
    hv-vendor-id proxmox
    reported-model host
It should complain that some options aren't found or whatever...
No. It looks like the previous message is stuck at "This message is awaiting moderator approval, and is invisible to normal visitors." for some reason, so let me post it here again:
In /etc/pve/virtual-guest/cpu-models.conf:

cpu-model: 7840HS
    flags -hypervisor;-ssbd;-tsc_adjust;-wbnoinvd...
UM790 = UM780, the only difference being the 790 has a 7940HS and the 780 has a 7840HS. Drivers, BIOS, almost everything is the same. The 7940HS has an unlocked CPU multiplier so you can overclock it, but because the mini PCs are limited by thermals/power (TDP) anyway, they perform almost identically to the...
Hi, in your /etc/pve/virtual-guest/cpu-models.conf try:
cpu-model: 7840HS
    flags -hypervisor;-ssbd;-tsc_adjust;-wbnoinvd
    phys-bits host
    hv-vendor-id proxmox
    reported-model host
You can replace 7840HS with 7940HS; they are the same silicon minus binning differences.
Also, in the VM, set cpu...
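For example, to point a VM at the model above (assuming VMID 100; Proxmox references custom models with a custom- prefix):

# use the custom model defined in cpu-models.conf
qm set 100 --cpu custom-7840HS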
How do I generate a cpu-models.conf that is equivalent (or as close as possible) to what setting cputype: host in the UI would give me? That way I can test each flag one by one (or binary-search for the flag that is causing the problem). For example, svm, among other flags (supported in host), is...
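The closest thing I've found so far is asking QEMU itself what host expands to over QMP and copying the flags from there. A sketch, assuming direct access to qemu-system-x86_64 with KVM (the reply is a JSON blob of feature properties):

echo '{"execute":"qmp_capabilities"}
{"execute":"query-cpu-model-expansion","arguments":{"type":"full","model":{"name":"host"}}}
{"execute":"quit"}' | qemu-system-x86_64 -enable-kvm -display none -qmp stdio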
That's possible, but when I loaded all the tabs bare metal on Windows 11, there was no problem. There have previously been reports of UM790 models crashing at either low power states or under stress, but a new motherboard revision was released to fix those issues and I supposedly have the new...
I haven't had a chance to test everything yet, but "max" crashes the host. It looks like anything with dynamic detection of CPU capabilities/features breaks it for some reason. Is this something I should report upstream (i.e. to QEMU)?
I did search for "Unhandled WRMSR" errors; the forums told me to ignore them! ;_;
I don't think it's a PSU issue. I stress-tested with s-tui for 10-15 minutes @ 60-65 W TDP and there were no stability issues, although I can see how that wouldn't necessarily rule it out.
I ran it in x86-64-v2-AES, no...
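(For reference, the stress test itself was just s-tui's built-in stress mode; assuming the Debian packages:

# s-tui drives the stress utility for its load test
apt install s-tui stress
s-tui

then switch from Monitor to Stress inside the TUI.)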
Hi, it works! Thank you for the reply!
I switched the CPU model from host to kvm64 like you suggested, and I am now able to load all 400 of my tabs upon resume.
Do you happen to know why? I know the 7840HS chipset is very new.
Maybe kernels are not yet ready for the 7840/7940H/S/X chipset...
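(For anyone else landing here: the whole fix was one line on the host, assuming VMID 100:

qm set 100 --cpu kvm64

i.e. cpu: kvm64 instead of cpu: host in /etc/pve/qemu-server/100.conf.)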
A 2nd journalctl after a crash on the new VM:
root@SnuUM780:~# journalctl -xeb-1
Jan 30 05:30:32 SnuUM780 kernel: fwln101i0: entered allmulticast mode
Jan 30 05:30:32 SnuUM780 kernel: fwln101i0: entered promiscuous mode
Jan 30 05:30:32 SnuUM780 kernel: fwbr101i0: port 1(fwln101i0) entered...
Edits:
Edit 1: I tried installing a new Windows VM following https://pve.proxmox.com/wiki/Windows_10_guest_best_practices and loaded a bunch of tabs in LibreWolf. No problem, no crashing. Update: it now crashes! I tried removing the network driver and reinstalling it in my old (bare metal -> Proxmox)...