I suggest starting a new thread and posting the link here, and I'll try to take a look at your setup. Please show the full details of your current configuration, including:
GRUB command line
current kernel version
VFIO module options
any blacklisted modules
dmesg output from boot onward, and tag where...
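For reference, the items above can usually be collected with commands like these (standard Linux tools; the file locations are common Proxmox conventions and may differ on your system):

```shell
# Kernel command line that GRUB passed at boot
cat /proc/cmdline

# Currently running kernel version
uname -r

# VFIO options and blacklists (filenames under /etc/modprobe.d/ vary by setup)
grep -r "vfio" /etc/modprobe.d/ 2>/dev/null
grep -r "blacklist" /etc/modprobe.d/ 2>/dev/null

# Boot log, filtered for IOMMU/VFIO-related lines
dmesg | grep -iE "iommu|vfio"
```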
After all my testing, I have confirmed that both the old method and the new method work, as long as you don't use q35-7.1.
Old Method - Blacklist the host drivers, assign the PCI IDs to vfio-pci, add a bunch of kernel options, and cross your fingers. This is what most blog posts and even the Proxmox wiki show...
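As a rough sketch of that old method (the PCI IDs below are the usual RX 480 GPU/HDMI-audio pair, shown only as an example; confirm yours with `lspci -nn`, and swap `intel_iommu=on` for `amd_iommu=on` on AMD CPUs):

```shell
# /etc/default/grub -- enable the IOMMU on the kernel command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf -- have vfio-pci claim the GPU at boot:
#   options vfio-pci ids=1002:67df,1002:aaf0

# /etc/modprobe.d/blacklist.conf -- keep host drivers off the card:
#   blacklist amdgpu
#   blacklist radeon

# Apply the changes, then reboot the host
update-grub
update-initramfs -u -k all
```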
I just changed the VM's machine type from q35-7.1 to q35-7.0 and it suddenly started working. I will experiment with more settings later and report which configurations work, in case it helps someone else in the future.
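For anyone wanting to try the same downgrade, the machine type can be pinned from the CLI as well as the GUI (VMID 100 below is a placeholder for your VM's ID; I believe versioned q35 types are written as `pc-q35-7.0` in the VM config):

```shell
# Pin the guest to the q35 7.0 machine type instead of 7.1
qm set 100 --machine pc-q35-7.0
```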
I was previously using Proxmox 6.1 and passing my RX 480 through to a Windows guest. It worked smoothly, except that an unexpected guest shutdown left the GPU unusable until the host did a full power cycle.
I updated to Proxmox 7.3 and the Windows guest stopped working. First...
@RokaKen - thank you, this is helpful. Is there any documentation explaining how I can track whether requests to increase RAM are served from released RAM or from the swap available on the host? Also, could this be added as a snippet to the "automatic memory management" section of the Wiki...
@Dunuin - thanks for the idea. I took a look and while the 28 GB ballooned machine is running, the host shows this:
root@proxmox:~# free -m
               total        used        free      shared  buff/cache   available
Mem:           32138       31088         342          42         707...
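As a quick sanity check on those numbers (`free -m` reports MiB, and total should roughly equal used + free + buff/cache, give or take rounding):

```shell
# Values copied from the free -m output above, in MiB
used=31088; free_mem=342; buffcache=707
echo $(( used + free_mem + buffcache ))   # 32137, vs. a 32138 MiB total
```

With only ~342 MiB truly free, the host has essentially no headroom while the ballooned guest is running.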
I have a Proxmox 6.0-4 host with 32 GB of memory and two guest machines that will each do work; I want to be able to share the RAM between them.
Neither machine will use 100% of its RAM at the same time, so my goal is to configure each one with a maximum of 28 GB of RAM and a minimum of 4...
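If it's useful, a minimal sketch of that configuration from the CLI (VMIDs 100 and 101 are placeholders; `qm` takes memory sizes in MiB, so 28 GiB = 28672 and 4 GiB = 4096):

```shell
# Ballooning: --memory is the maximum, --balloon is the minimum floor
qm set 100 --memory 28672 --balloon 4096
qm set 101 --memory 28672 --balloon 4096
```

The virtio balloon device then lets the host reclaim memory from whichever guest is idle.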