I have a VMware environment that I use as an all-in-one with a Solaris guest providing storage to the VMware host. This is possible because I can pass through the PCI JBOD controller to the Solaris VM so it will see all the physical disks. It's not an uncommon setup & you can see how I've written about it here & here.
However, I'm getting tired of VMware & its management bloat. It's not ideal for a small homelab. I want to convert to Proxmox, but I need to figure out the passthrough step.
This is what I've seen: http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM
Does this still apply to Proxmox? I'm unclear about which steps I'd need to take. Would I need to recompile the kernel? Hopefully I'd just be able to do step 4 onward, but I wanted to be sure before blowing away my VMware install:
- Load the pci-stub driver if it is compiled as a module (modprobe pci-stub)
- lspci -n
- locate the entry for device 01:00.0 and note down the vendor & device ID, 8086:10b9:
01:00.0 0200: 8086:10b9 (rev 06)
...
- echo "8086 10b9" > /sys/bus/pci/drivers/pci-stub/new_id
- echo 0000:01:00.0 > /sys/bus/pci/devices/0000:01:00.0/driver/unbind
- echo 0000:01:00.0 > /sys/bus/pci/drivers/pci-stub/bind
- modprobe kvm
- modprobe kvm-intel
- /usr/local/bin/qemu-system-x86_64 -m 512 -boot c -net none -hda /root/ia32e_rhel5u1.img -device pci-assign,host=01:00.0
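For reference, the pci-stub binding steps above can be collected into one script. This is just a dry-run sketch, assuming the 8086:10b9 device at 0000:01:00.0 from my lspci output; it only prints the commands rather than running them, so nothing gets touched until you review and execute them yourself:

```shell
#!/bin/sh
# Dry-run sketch of the pci-stub steps above. Substitute your own
# controller's vendor:device ID and PCI address. plan() records and
# prints each command instead of executing it.
DEV=0000:01:00.0
IDS="8086 10b9"

CMDS=""
plan() {
    CMDS="$CMDS$*
"
    echo "+ $*"
}

plan "modprobe pci-stub"                                   # load the stub driver if modular
plan "echo '$IDS' > /sys/bus/pci/drivers/pci-stub/new_id"  # tell pci-stub to claim this vendor:device
plan "echo $DEV > /sys/bus/pci/devices/$DEV/driver/unbind" # detach from whatever driver holds it now
plan "echo $DEV > /sys/bus/pci/drivers/pci-stub/bind"      # hand the device to pci-stub
plan "modprobe kvm"                                        # load the KVM core module
plan "modprobe kvm-intel"                                  # load the Intel-specific KVM module
```

To actually run the commands, replace the echo inside plan() with eval "$*" — but only after checking that each sysfs path exists on your box.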