How do I pass through USB 3.0 and a GPU to a Hackintosh Big Sur VM?

Did you activate vendor-reset? Does it work now with the Ubuntu live installer? Do you see reset messages in journalctl?

I think I activated it: modprobe vendor-reset, right?

Or echo device_specific >'/sys/bus/pci/devices/0000:03:00.0/reset_method'?
I did enter that command and nothing shows up.

It doesn't work with Ubuntu either.

The only messages I see with journalctl | grep reset are:
Jun 19 20:29:21 pve kernel: vendor_reset: loading out-of-tree module taints kernel.
Jun 19 20:29:22 pve systemd-modules-load[449]: Inserted module 'vendor_reset'
Jun 19 20:29:22 pve kernel: vendor_reset_hook: installed


My conclusion:
I think we are still back at square one because my GPU is not in a dedicated group hahaha...
It would be better if I had a board with several PCIe slots...
 
I think I activated it: modprobe vendor-reset, right?
No, that just loads the module, which is also done at boot via /etc/modules (if vendor_reset is listed in there).
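If you want it loaded automatically at every boot, the module name just has to be listed in /etc/modules, roughly like this (a sketch; modprobe treats vendor-reset and vendor_reset as the same name):

# /etc/modules: kernel modules to load at boot time, one per line
vendor-reset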
Or echo device_specific >'/sys/bus/pci/devices/0000:03:00.0/reset_method'?
I did enter that command and nothing shows up.
No output is good: that means device_specific is now active for that GPU.
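If you want to double-check rather than rely on the absence of output, reading the same sysfs file back should show the active method, for example:

cat /sys/bus/pci/devices/0000:03:00.0/reset_method
# expected to include: device_specific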
It doesn't work with Ubuntu either.
What does journalctl look like when starting the Ubuntu VM? What kind of messages and errors do you get? Please don't just grep; scroll to the time you started the VM with vendor-reset activated for the GPU.
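For example, something along these lines shows the full log around the VM start instead of only lines containing "reset" (the timestamps are placeholders, adjust them to the actual start time):

journalctl -b --since "2023-06-19 20:29" --until "2023-06-19 20:40"
# or watch it live from a second shell while starting the VM:
journalctl -f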
The only messages I see with journalctl | grep reset are:
Jun 19 20:29:21 pve kernel: vendor_reset: loading out-of-tree module taints kernel.
Jun 19 20:29:22 pve systemd-modules-load[449]: Inserted module 'vendor_reset'
Jun 19 20:29:22 pve kernel: vendor_reset_hook: installed
I don't see the expected messages from vendor-reset resetting your NAVI 14 GPU. Maybe it was not activated or you did not start the VM yet (which triggers the reset).
My conclusion:
I think we are still back at square one because my GPU is not in a dedicated group hahaha...
What makes you think the GPU is not isolated in its IOMMU group? The GPU functions (and some PCI bridges, but you don't need to worry about those) are neatly in group 1.
The problem here is that you are having a difficult time getting vendor-reset activated for your GPU, and the GPU will not reset properly without it. IOMMU is working fine.
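A common way to check this yourself is the usual shell loop that prints every device per IOMMU group (a generic snippet, not specific to this board):

for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    printf 'IOMMU group %s: ' "$g"
    lspci -nns "${d##*/}"
done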
It would be better if I had a board with several PCIe slots...
That does not fix the issue that the 5500XT does not reset properly without vendor-reset.
 
I don't see the expected messages from vendor-reset resetting your NAVI 14 GPU. Maybe it was not activated or you did not start the VM yet (which triggers the reset).

I think this is the message that you are looking for:

Jun 20 00:09:59 pve pvedaemon[3788]: start VM 100: UPID:pve:00000ECC:0001D37B:64908BE7:qmstart:100:root@pam:
Jun 20 00:09:59 pve pvedaemon[991]: <root@pam> starting task UPID:pve:00000ECC:0001D37B:64908BE7:qmstart:100:root@pam:
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: version 1.1
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing pre-reset
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing reset
Jun 20 00:09:59 pve kernel: ATOM BIOS: xxx-xxx-xxx
Jun 20 00:09:59 pve kernel: vendor-reset-drm: atomfirmware: bios_scratch_reg_offset initialized to 4c
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: bus reset disabled? yes
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: SMU response reg: 0, sol reg: 0, mp1 intr enabled? no,>
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing post-reset
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: reset result = 0

correct?

Jun 20 00:09:59 pve pvedaemon[3788]: start VM 100: UPID:pve:00000ECC:0001D37B:64908BE7:qmstart:100:root@pam:
Jun 20 00:09:59 pve pvedaemon[991]: <root@pam> starting task UPID:pve:00000ECC:0001D37B:64908BE7:qmstart:100:root@pam:
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: version 1.1
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing pre-reset
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing reset
Jun 20 00:09:59 pve kernel: ATOM BIOS: xxx-xxx-xxx
Jun 20 00:09:59 pve kernel: vendor-reset-drm: atomfirmware: bios_scratch_reg_offset initialized to 4c
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: bus reset disabled? yes
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: SMU response reg: 0, sol reg: 0, mp1 intr enabled? no,>
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing post-reset
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: reset result = 0
Jun 20 00:09:59 pve systemd[1]: Created slice qemu.slice.
Jun 20 00:09:59 pve systemd[1]: Started 100.scope.
Jun 20 00:09:59 pve systemd-udevd[3805]: Using default interface naming scheme 'v247'.
Jun 20 00:09:59 pve systemd-udevd[3805]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not w>
Jun 20 00:09:59 pve kernel: device tap100i0 entered promiscuous mode
Jun 20 00:09:59 pve systemd-udevd[3805]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not w>
Jun 20 00:09:59 pve systemd-udevd[3805]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not w>
Jun 20 00:09:59 pve systemd-udevd[3808]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not w>
Jun 20 00:09:59 pve systemd-udevd[3808]: Using default interface naming scheme 'v247'.
Jun 20 00:09:59 pve kernel: vmbr0: port 2(fwpr100p0) entered blocking state
Jun 20 00:09:59 pve kernel: vmbr0: port 2(fwpr100p0) entered disabled state
Jun 20 00:09:59 pve kernel: device fwpr100p0 entered promiscuous mode
Jun 20 00:09:59 pve kernel: vmbr0: port 2(fwpr100p0) entered blocking state
Jun 20 00:09:59 pve kernel: vmbr0: port 2(fwpr100p0) entered forwarding state
Jun 20 00:09:59 pve kernel: fwbr100i0: port 1(fwln100i0) entered blocking state
Jun 20 00:09:59 pve kernel: fwbr100i0: port 1(fwln100i0) entered disabled state
Jun 20 00:09:59 pve kernel: device fwln100i0 entered promiscuous mode
Jun 20 00:09:59 pve kernel: fwbr100i0: port 1(fwln100i0) entered blocking state
Jun 20 00:09:59 pve kernel: fwbr100i0: port 1(fwln100i0) entered forwarding state
Jun 20 00:09:59 pve kernel: fwbr100i0: port 2(tap100i0) entered blocking state
Jun 20 00:09:59 pve kernel: fwbr100i0: port 2(tap100i0) entered disabled state
Jun 20 00:09:59 pve kernel: fwbr100i0: port 2(tap100i0) entered blocking state
Jun 20 00:09:59 pve kernel: fwbr100i0: port 2(tap100i0) entered forwarding state
Jun 20 00:10:09 pve pvedaemon[993]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - u>
Jun 20 00:10:13 pve pvedaemon[991]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - u>
Jun 20 00:10:16 pve pvestatd[961]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - un>
Jun 20 00:10:27 pve pveproxy[2443]: detected empty handle
Jun 20 00:10:27 pve pve-firewall[962]: firewall update time (15.067 seconds)
Jun 20 00:10:27 pve pveproxy[2443]: proxy detected vanished client connection
Jun 20 00:10:27 pve pveproxy[2443]: detected empty handle
Jun 20 00:10:27 pve pveproxy[2444]: detected empty handle
Jun 20 00:10:27 pve pveproxy[2443]: detected empty handle
Jun 20 00:10:27 pve pvestatd[961]: status update time (24.107 seconds)
Jun 20 00:10:29 pve pvedaemon[3957]: starting termproxy UPID:pve:00000F75:0001DF21:64908C04:vncshell::root@pam:
Jun 20 00:10:29 pve pvedaemon[992]: <root@pam> starting task UPID:pve:00000F75:0001DF21:64908C04:vncshell::root@pam:
Jun 20 00:10:29 pve pvedaemon[3788]: start failed: command '/usr/bin/kvm -id 100 -name '11.BigSur,debug-threads=on' ->
Jun 20 00:10:30 pve pvedaemon[991]: <root@pam> end task UPID:pve:00000ECC:0001D37B:64908BE7:qmstart:100:root@pam: sta>
Jun 20 00:10:31 pve pvedaemon[992]: <root@pam> successful auth for user 'root@pam'
Jun 20 00:10:31 pve login[3960]: pam_unix(login:session): session opened for user root(uid=0) by root(uid=0)
Jun 20 00:10:31 pve systemd[1]: Created slice User Slice of UID 0.
Jun 20 00:10:31 pve systemd[1]: Starting User Runtime Directory /run/user/0...
Jun 20 00:10:31 pve systemd-logind[652]: New session 7 of user root.
Jun 20 00:10:31 pve systemd[1]: Finished User Runtime Directory /run/user/0.
Jun 20 00:10:31 pve systemd[1]: Starting User Manager for UID 0...
Jun 20 00:10:31 pve systemd[3966]: pam_unix(systemd-user:session): session opened for user root(uid=0) by (uid=0)
Jun 20 00:10:31 pve systemd[3966]: Queued start job for default target Main User Target.
Jun 20 00:10:31 pve systemd[3966]: Created slice User Application Slice.
Jun 20 00:10:31 pve systemd[3966]: Reached target Paths.
Jun 20 00:10:31 pve systemd[3966]: Reached target Timers.
Jun 20 00:10:31 pve systemd[3966]: Listening on GnuPG network certificate management daemon.
Jun 20 00:10:31 pve systemd[3966]: Listening on GnuPG cryptographic agent and passphrase cache (access for web browse>
Jun 20 00:10:31 pve systemd[3966]: Listening on GnuPG cryptographic agent and passphrase cache (restricted).
Jun 20 00:10:31 pve systemd[3966]: Listening on GnuPG cryptographic agent (ssh-agent emulation).
Jun 20 00:10:31 pve systemd[3966]: Listening on GnuPG cryptographic agent and passphrase cache.
Jun 20 00:10:31 pve systemd[3966]: Reached target Sockets.
Jun 20 00:10:31 pve systemd[3966]: Reached target Basic System.
Jun 20 00:10:31 pve systemd[3966]: Reached target Main User Target.
Jun 20 00:10:31 pve systemd[3966]: Startup finished in 125ms.
Jun 20 00:10:31 pve systemd[1]: Started User Manager for UID 0.
Jun 20 00:10:31 pve systemd[1]: Started Session 7 of user root.
Jun 20 00:10:31 pve login[3981]: ROOT LOGIN on '/dev/pts/0'
Jun 20 00:10:36 pve pvestatd[961]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - un>
Jun 20 00:10:37 pve pvestatd[961]: status update time (9.403 seconds)
Jun 20 00:10:51 pve pvestatd[961]: VM 100 qmp command failed - VM 100 qmp command 'query-proxmox-support' failed - un>
Jun 20 00:11:09 pve pve-firewall[962]: firewall update time (20.916 seconds)
Jun 20 00:11:09 pve pvestatd[961]: status update time (31.681 seconds)
Jun 20 00:11:42 pve pveproxy[2443]: detected empty handle
 
I think this is the message that you are looking for:

Jun 20 00:09:59 pve pvedaemon[3788]: start VM 100: UPID:pve:00000ECC:0001D37B:64908BE7:qmstart:100:root@pam:
Jun 20 00:09:59 pve pvedaemon[991]: <root@pam> starting task UPID:pve:00000ECC:0001D37B:64908BE7:qmstart:100:root@pam:
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: version 1.1
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing pre-reset
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing reset
Jun 20 00:09:59 pve kernel: ATOM BIOS: xxx-xxx-xxx
Jun 20 00:09:59 pve kernel: vendor-reset-drm: atomfirmware: bios_scratch_reg_offset initialized to 4c
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: bus reset disabled? yes
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: SMU response reg: 0, sol reg: 0, mp1 intr enabled? no,>
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing post-reset
Jun 20 00:09:59 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: reset result = 0

correct?
Yes, it looks like vendor-reset is working. What is the VM configuration file you use with the Ubuntu live installer? Do you see output on the physical display? Are there any errors in journalctl that might explain why Ubuntu live install is not working?
EDIT: Do you blacklist amdgpu or early bind the GPU to vfio-pci, or not? Asked differently, what did you add or change in the .conf files in /etc/modprobe.d/?
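A quick way to answer that is to dump every file in that directory together with its name, and to check which driver currently claims the GPU, for example:

grep -H . /etc/modprobe.d/*.conf
lspci -nnk -s 03:00.0    # look at the "Kernel driver in use" line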
 
Yes, it looks like vendor-reset is working. What is the VM configuration file you use with the Ubuntu live installer? Do you see output on the physical display? Are there any errors in journalctl that might explain why Ubuntu live install is not working?
EDIT: Do you blacklist amdgpu or early bind the GPU to vfio-pci, or not? Asked differently, what did you add or change in the .conf files in /etc/modprobe.d/?

The previous log was for VM 100, the Hackintosh VM; this one is for the Ubuntu VM (102):
Jun 20 00:45:34 pve pvedaemon[1292]: start VM 102: UPID:pve:0000050C:00002EB0:6490943E:qmstart:102:root@pam:
Jun 20 00:45:34 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: version 1.1
Jun 20 00:45:34 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing pre-reset
Jun 20 00:45:34 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing reset
Jun 20 00:45:35 pve kernel: ATOM BIOS: xxx-xxx-xxx
Jun 20 00:45:35 pve kernel: vendor-reset-drm: atomfirmware: bios_scratch_reg_offset initialized to 4c
Jun 20 00:45:35 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: bus reset disabled? yes
Jun 20 00:45:35 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: SMU response reg: 0, sol reg: 0, mp1 intr enabled? no, bl ready? yes
Jun 20 00:45:35 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: performing post-reset
Jun 20 00:45:35 pve kernel: vfio-pci 0000:03:00.0: AMD_NAVI14: reset result = 0
Jun 20 00:45:35 pve systemd[1]: Created slice qemu.slice.
Jun 20 00:45:35 pve systemd[1]: Started 102.scope.
Jun 20 00:45:35 pve systemd-udevd[1309]: Using default interface naming scheme 'v247'.
Jun 20 00:45:35 pve systemd-udevd[1309]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 20 00:45:35 pve kernel: device tap102i0 entered promiscuous mode
Jun 20 00:45:35 pve systemd-udevd[1309]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 20 00:45:35 pve systemd-udevd[1309]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 20 00:45:35 pve systemd-udevd[1312]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Jun 20 00:45:35 pve systemd-udevd[1312]: Using default interface naming scheme 'v247'.
Jun 20 00:45:35 pve kernel: vmbr0: port 2(fwpr102p0) entered blocking state
Jun 20 00:45:35 pve kernel: vmbr0: port 2(fwpr102p0) entered disabled state
Jun 20 00:45:35 pve kernel: device fwpr102p0 entered promiscuous mode
Jun 20 00:45:35 pve kernel: vmbr0: port 2(fwpr102p0) entered blocking state
Jun 20 00:45:35 pve kernel: vmbr0: port 2(fwpr102p0) entered forwarding state
Jun 20 00:45:35 pve kernel: fwbr102i0: port 1(fwln102i0) entered blocking state
Jun 20 00:45:35 pve kernel: fwbr102i0: port 1(fwln102i0) entered disabled state
Jun 20 00:45:35 pve kernel: device fwln102i0 entered promiscuous mode
Jun 20 00:45:35 pve kernel: fwbr102i0: port 1(fwln102i0) entered blocking state
Jun 20 00:45:35 pve kernel: fwbr102i0: port 1(fwln102i0) entered forwarding state
Jun 20 00:45:35 pve kernel: fwbr102i0: port 2(tap102i0) entered blocking state
Jun 20 00:45:35 pve kernel: fwbr102i0: port 2(tap102i0) entered disabled state
Jun 20 00:45:35 pve kernel: fwbr102i0: port 2(tap102i0) entered blocking state
Jun 20 00:45:35 pve kernel: fwbr102i0: port 2(tap102i0) entered forwarding state
Jun 20 00:45:43 pve pvedaemon[993]: VM 102 qmp command failed - VM 102 qmp command 'query-proxmox-support' failed - unable to connect to VM 102 qmp socket - timeout after 48 retries
Jun 20 00:45:44 pve pvestatd[964]: VM 102 qmp command failed - VM 102 qmp command 'query-proxmox-support' failed - unable to connect to VM 102 qmp socket - timeout after 47 retries
Jun 20 00:45:44 pve pvestatd[964]: status update time (8.904 seconds)
Jun 20 00:45:55 pve pvestatd[964]: VM 102 qmp command failed - VM 102 qmp command 'query-proxmox-support' failed - unable to connect to VM 102 qmp socket - timeout after 2 retries
Jun 20 00:45:55 pve pveproxy[1004]: detected empty handle
Jun 20 00:45:55 pve pveproxy[1002]: detected empty handle
Jun 20 00:45:55 pve pveproxy[1004]: detected empty handle
Jun 20 00:45:55 pve pveproxy[1002]: detected empty handle
Jun 20 00:45:58 pve pvestatd[964]: status update time (12.132 seconds)
Jun 20 00:46:11 pve pvestatd[964]: VM 102 qmp command failed - VM 102 qmp command 'query-proxmox-support' failed - unable to connect to VM 102 qmp socket - timeout after 4 retries
Jun 20 00:46:13 pve pvedaemon[1292]: start failed: command '/usr/bin/kvm -id 102 -name 'ubuntu,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/102.qmp,server=on,wait=off'>
Jun 20 00:46:13 pve pvedaemon[994]: VM 102 qmp command failed - VM 102 qmp command 'query-proxmox-support' failed - unable to connect to VM 102 qmp socket - timeout after 7 retries
Jun 20 00:46:24 pve pveproxy[1003]: detected empty handle
Jun 20 00:46:24 pve pveproxy[1004]: detected empty handle
Jun 20 00:46:25 pve pveproxy[1003]: detected empty handle
Jun 20 00:46:26 pve pveproxy[1004]: detected empty handle
Jun 20 00:46:27 pve pveproxy[1003]: detected empty handle
Jun 20 00:46:28 pve pveproxy[1004]: detected empty handle

VM config files: were you referring to these?
args: -device isa-applesmc,osk="ourhardworkbythesewordsguardedpleasedontsteal(c)AppleComputerInc" -smbios type=2 -dev>
bios: ovmf
boot: order=virtio0;net0
cores: 8
cpu: Penryn
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,size=4M
hostpci0: 0000:03:00,pcie=1,x-vga=1
machine: q35
memory: 16384
meta: creation-qemu=7.2.0,ctime=1686922172
name: 11.BigSur
net0: vmxnet3=06:1C:07:1C:98:C7,bridge=vmbr0,firewall=1
numa: 0
ostype: other
parent: test_snapshot
scsihw: virtio-scsi-single
smbios1: uuid=a4ca3031-0d1f-484c-a3a7-84903e53318c
sockets: 1
usb0: host=1-7
usb1: host=1-8
usb2: host=1-9.4
virtio0: local-lvm:vm-100-disk-1,cache=unsafe,iothread=1,size=500G
vmgenid: 5b0318ef-0888-4461-9ab9-6930845f71d5
boot: order=scsi0;net0
cores: 8
hostpci0: 0000:03:00,pcie=1,x-vga=1
machine: q35
memory: 16384
meta: creation-qemu=7.2.0,ctime=1687152027
name: ubuntu
net0: virtio=CA:38:53:C0:62:1B,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-102-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=f04b056c-19cf-492d-bd5c-ad3e6058cd51
sockets: 1
vmgenid: 6ac6a71b-f50b-45de-bb01-82683e992acd

I think they are showing a similar error because I only see it from the web interface. Every time I add the GPU hardware (to either the Hackintosh or the Ubuntu VM) and start the VM, Proxmox somehow crashes. If I refresh the page, it will not connect to Proxmox, and I have to wait around 4-5 minutes to connect to Proxmox again...

How do I bind the GPU to vfio-pci?
Inside the /etc/modprobe.d folder there are blacklist.conf, dkms.conf, iommu_unsafe_interrupts.conf, kvm.conf, pve-blacklist.conf and vfio.conf; the blacklist entries are:
blacklist radeon
blacklist nouveau
blacklist nvidia
 
boot: order=scsi0;net0
cores: 8
hostpci0: 0000:03:00,pcie=1,x-vga=1
machine: q35
memory: 16384
meta: creation-qemu=7.2.0,ctime=1687152027
name: ubuntu
net0: virtio=CA:38:53:C0:62:1B,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: local-lvm:vm-102-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=f04b056c-19cf-492d-bd5c-ad3e6058cd51
sockets: 1
vmgenid: 6ac6a71b-f50b-45de-bb01-82683e992acd
For Ubuntu (you did not have to install it, just boot from the installer ISO): don't use Primary GPU, because that's for NVIDIA, and set Display to None to make sure it uses the passed-through GPU. 4 GB of memory is enough for testing (to rule out memory over-commit), as are 2 cores.
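In config terms, the same settings could be applied with qm set, something like this sketch (VM ID 102 is the Ubuntu VM from the logs above; the options mirror the GUI fields just described):

qm set 102 --vga none                      # Display: None
qm set 102 --memory 4096 --cores 2
qm set 102 --hostpci0 0000:03:00,pcie=1    # no x-vga=1, i.e. Primary GPU unchecked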
I think they are showing a similar error because I only see it from the web interface. Every time I add the GPU hardware (to either the Hackintosh or the Ubuntu VM) and start the VM, Proxmox somehow crashes. If I refresh the page, it will not connect to Proxmox, and I have to wait around 4-5 minutes to connect to Proxmox again...
Please don't start the macOS VM before testing with Ubuntu, because that may screw everything up. Please reboot the Proxmox host before each test.
how to bind GPU to vfio-pci?
inside /etc/modprobe.d folder, there are blacklist.conf dkms.conf iommu_unsafe_interrupts.conf kvm.conf pve-blacklist.conf vfio.conf
blacklist radeon
blacklist nouveau
blacklist nvidia
I don't know what you put in the other .conf files, so I'm not sure if there is something wrong there. For Ubuntu, I think you don't need any of those files.
Don't blacklist amdgpu and also don't early bind the GPU to vfio-pci because amdgpu should cleanly unbind the GPU from Proxmox, otherwise you need an additional work-around.
It might help to run echo 0 | tee /sys/class/vtconsole/vtcon*/bind before starting the VM to make sure the switch from amdgpu to vfio-pci works without problems.
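Putting the work-arounds together, a pre-start sequence on the host might look like this sketch (the PCI address and VM ID 102 are taken from this thread):

echo device_specific > /sys/bus/pci/devices/0000:03:00.0/reset_method
echo 0 | tee /sys/class/vtconsole/vtcon*/bind
qm start 102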
 
For Ubuntu (you did not have to install it, just boot from the installer ISO): don't use Primary GPU, because that's for NVIDIA, and set Display to None to make sure it uses the passed-through GPU. 4 GB of memory is enough for testing (to rule out memory over-commit), as are 2 cores.

Please don't start the macOS VM before testing with Ubuntu, because that may screw everything up. Please reboot the Proxmox host before each test.

I don't know what you put in the other .conf files, so I'm not sure if there is something wrong there. For Ubuntu, I think you don't need any of those files.
Don't blacklist amdgpu and also don't early bind the GPU to vfio-pci because amdgpu should cleanly unbind the GPU from Proxmox, otherwise you need an additional work-around.
It might help to run echo 0 | tee /sys/class/vtconsole/vtcon*/bind before starting the VM to make sure the switch from amdgpu to vfio-pci works without problems.

I turned on the computer, unchecked Primary GPU, set Display to None, entered echo device_specific >'/sys/bus/pci/devices/0000:03:00.0/reset_method', started the VM and, for the first time, it works! With Ubuntu, not the Hackintosh hahaha...

Then I restarted Proxmox and did the same thing for the Hackintosh VM, no luck T_T
Even after I tweaked the Apple OS configuration, same thing: Proxmox still crashes after I start the VM...

echo 0 | tee /sys/class/vtconsole/vtcon*/bind just shows 0
 
I turned on the computer, unchecked Primary GPU, set Display to None, entered echo device_specific >'/sys/bus/pci/devices/0000:03:00.0/reset_method', started the VM and, for the first time, it works! With Ubuntu, not the Hackintosh hahaha...
At least you now know that your GPU passthrough can work and that the necessary work-arounds (vendor-reset) are in place.
Then I restarted Proxmox and did the same thing for the Hackintosh VM, no luck T_T
Even after I tweaked the Apple OS configuration, same thing: Proxmox still crashes after I start the VM...
Sorry, I can't help with that. At least you have some passthrough working now.
echo 0 | tee /sys/class/vtconsole/vtcon*/bind just shows 0
That's normal and good. It just stops the Proxmox host console gracefully (before you take away the GPU when starting the VM).
 
At least you now know that your GPU passthrough can work and that the necessary work-arounds (vendor-reset) are in place.

Sorry, I can't help with that. At least you have some passthrough working now.

That's normal and good. It just stops the Proxmox host console gracefully (before you take away the GPU when starting the VM).

I have questions:
1. How do I make echo device_specific >'/sys/bus/pci/devices/0000:03:00.0/reset_method' execute automatically every time I start Proxmox? I tried putting it into /etc/modules and updating the initramfs, but it doesn't work.
2. Let's say no. 1 is done and I make Ubuntu start at boot: if I shut down Ubuntu, will Proxmox shut down too? If not, how do I make it do so? Note: right now, after I shut down Ubuntu the screen goes blank; I think it should go back to the login page. Or can I cheat it? Meaning, while I am inside Ubuntu, go to the Proxmox server IP address, enter a shutdown command set for 2 minutes from now, then shutdown now Ubuntu...
 
I have questions:
1. How do I make echo device_specific >'/sys/bus/pci/devices/0000:03:00.0/reset_method' execute automatically every time I start Proxmox? I tried putting it into /etc/modules and updating the initramfs, but it doesn't work.
Use cron, rc.local or a hookscript. There are lots of threads about this on this forum.
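As one example of the cron route, a root crontab entry with @reboot could apply the setting at every boot (a sketch; the PCI address is the one from this thread, and the short sleep is only there in case sysfs is not ready yet):

# crontab -e  (as root)
@reboot sleep 10 && echo device_specific > /sys/bus/pci/devices/0000:03:00.0/reset_method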
2. Let's say no. 1 is done and I make Ubuntu start at boot: if I shut down Ubuntu, will Proxmox shut down too? If not, how do I make it do so?
No, but you could make a hookscript that shuts down Proxmox when a VM is shut down.
Note: right now, after I shut down Ubuntu the screen goes blank; I think it should go back to the login page. Or can I cheat it?
Proxmox does not do the reverse (unbinding vfio-pci and binding the original driver). You can load and rebind amdgpu yourself. See other threads about this on this forum.
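A manual rebind after the VM has stopped might look roughly like this (an untested sketch; adjust the PCI address and repeat for the audio function if that was passed through as well):

echo 0000:03:00.0 > /sys/bus/pci/drivers/vfio-pci/unbind
modprobe amdgpu
echo 0000:03:00.0 > /sys/bus/pci/drivers_probe    # let the kernel bind amdgpu again
echo 1 | tee /sys/class/vtconsole/vtcon*/bind     # re-enable the host console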
Meaning, while I am inside Ubuntu, go to the Proxmox server IP address, enter a shutdown command set for 2 minutes from now, then shutdown now Ubuntu...
If you go to the Proxmox web GUI from inside the VM, you can just click Shutdown for the Proxmox node and the VMs will also be shut down gracefully.
 
Use cron, rc.local or a hookscript. There are lots of threads about this on this forum.

No, but you could make a hookscript that shuts down Proxmox when a VM is shut down.
I think I managed to auto-activate vendor-reset via crontab...

For auto-shutdown of Proxmox via hookscript, no luck... @_@

What is the syntax for the auto-shutdown .pl? I just wrote:
status=$1
if [[ "$status" == "post-stop" ]]; then
shutdown now;
fi
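For reference, a bash hookscript along these lines might look like the sketch below. The main assumptions: Proxmox passes the VM ID as the first argument and the phase (e.g. post-stop) as the second, the script lives in a snippets storage, and it is marked executable; the path and VM ID are just examples.

#!/bin/bash
# e.g. /var/lib/vz/snippets/shutdown-host.sh
vmid="$1"
phase="$2"
if [ "$phase" = "post-stop" ]; then
    shutdown -h now    # power off the Proxmox host after this VM stops
fi
exit 0

It would then be attached with something like qm set 102 --hookscript local:snippets/shutdown-host.sh after chmod +x on the file.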
 