Definitely give it a try. As long as you see the Ctrl+C prompt for the HBA BIOS when you boot the TrueNAS VM, the passthrough itself should be working. If you don't get that far, it may be something else.
Okay so it sounds like you'd be going from Storage Spaces to ZFS. The only benchmark comparison I could find was this one that shows on some workflows ZFS is much faster than SS.
In your current server config, are you using an onboard consumer RAID controller to achieve RAID 5? That may be part of the problem if it doesn't have a proper cache built in, which can cause high IO delay. Running an HBA -> ZFS with a large amount of RAM would solve this.
I just got...
I mean, how long are you waiting for the TrueNAS VM to boot? I assume that is what is getting "stuck".
Do you have a screenshot of what you see on the vm console when it is stuck?
On my LSI card I hit Ctrl+C when prompted during the VM boot to access the card BIOS.
Also, now that I'm...
https://forum.proxmox.com/threads/vm-not-starting-after-upgrading-kernel-and-reboot-timeout-waiting-on-systemd.104292/post-449049
This gentleman has provided a guide for changing to the old kernel permanently by editing the GRUB files. I have not tried this myself.
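For reference, the usual way this is done (I haven't tried it myself, so verify the details against that post before relying on it) is to point GRUB_DEFAULT at the older kernel's menu entry and regenerate the GRUB config:

```shell
# /etc/default/grub -- example only; the "1>2" index is an assumption and
# depends on your menu layout: it means the second top-level entry
# ("Advanced options for Proxmox VE") and its third sub-entry.
GRUB_DEFAULT="1>2"

# Then regenerate the GRUB config so the change takes effect on next boot:
update-grub
```

Check your actual menu entries (e.g. in /boot/grub/grub.cfg) to find the one for 5.13.19-3-pve before setting the index.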
How long are you waiting for it to boot? My LSI 9207 would take upwards of 2 to 3 full minutes to initialize while booting my TrueNAS VM. Once I disabled the boot function on the LSI card BIOS it takes about 30 seconds.
Can you access a shell on the Proxmox node and run "pveversion"?
The latest kernel (5.13.19-4-pve) seems to be breaking PCI passthrough. If you are running 5.13.19-4-pve, try booting to an older kernel from the GRUB boot menu. 5.13.19-3-pve is working fine for me.
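To double-check which kernel the node actually booted into (as opposed to what's merely installed), you can run this from the node's shell:

```shell
# Print the running kernel release; on an affected node this
# would show 5.13.19-4-pve.
uname -r
```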