pve - PCI passthrough

Ingenieur

New Member
Oct 28, 2010
Düsseldorf, Germany
Hello to all of you,

pve is great.
I recently started to set up a virtualization box for my home.
I started with XenServer, which is a great tool.
I switched to PVE as it allows more hands-on work on some specific things.
My overall goal is to set up a box running a virtualized Windows for home automation and entertainment (TV streaming with Team MediaPortal via a DVB-T and a Cine S2 satellite card), as well as some Linux instances for intranet, address management, etc. (e.g. www.amahi.org).

For the Windows guest to be able to communicate with the SAT/DVB-T cards, I have to enable the so-called PCI passthrough, where the host passes the PCI device directly to the guest.

Has anyone had any experience with it?
Any ideas that could help?


Thanks in advance for all your input.

Best regards

Reinhard

Hardware used: Asus M4A78-E, AMD 5050e, 790GX chipset, 8 GB RAM, WD Green EADS 1.5 TB HDD
 
PCI passthrough is not very user-friendly. We do not recommend it, as it works only with certain special hardware setups, and it is more or less impossible to have all of that hardware here in our lab for testing.

To start experimenting, take a look at the man page of qm (man qm) and use the hostpci option.
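
In case it helps to see what that looks like in practice, here is a minimal sketch (the PCI address 04:00.0, the VMID 101, and the config path are examples only; the path can differ between PVE versions, so check man qm on your install):

    # find the PCI address of the DVB/SAT card
    lspci -nn

    # pass it through by adding a hostpci entry to the VM's config
    # (e.g. /etc/qemu-server/101.conf on PVE 1.x)
    echo "hostpci0: 04:00.0" >> /etc/qemu-server/101.conf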
 
If Windows is not needed, an OpenVZ container has better access to hardware. I ran MythTV with a DVB card and it worked very well. You'll need to find all the DVB devices (under /dev) and give the container access to them; that's the hardest part.
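
If you go that route, a rough sketch of how that device access can be granted (the CTID 101 and the adapter paths are examples; the exact node names depend on your tuner):

    # list the DVB device nodes on the host
    ls -lR /dev/dvb

    # grant the container access to each node, then restart it
    vzctl set 101 --devnodes dvb/adapter0/frontend0:rw --save
    vzctl set 101 --devnodes dvb/adapter0/demux0:rw --save
    vzctl set 101 --devnodes dvb/adapter0/dvr0:rw --save
    vzctl restart 101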
 
vlad, so you have/had a MythTV backend running on Proxmox? How much disk space is attached, and how did you attach it? Is the disk fast enough to record HD content?

I was thinking of moving my MythTV backend to Proxmox. My HD tuner is on the network, so hardware should not be an issue.
 
I did not try HD, just SD (4 tuners), and did up to 5-6 simultaneous recordings, but I don't see a reason why HD would not work. I used a 6 TB software RAID 5 array and "mount --bind" to link it inside the container.
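
For reference, a sketch of that bind-mount approach as a per-container mount script, which OpenVZ runs on every start of the container (the paths, the CTID 101, and the mount point inside the container are examples):

    # /etc/vz/conf/101.mount
    #!/bin/bash
    . /etc/vz/vz.conf
    . ${VE_CONFFILE}
    # expose the host's recording array inside the container's root filesystem
    mount --bind /srv/recordings ${VE_ROOT}/var/lib/mythtv

The same mount --bind command can also be run by hand against the container's root under /var/lib/vz/root/ for a quick test before making it permanent.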
 
I am currently using PCI passthrough on VMware ESXi and it works just fine, within its limits -- as pointed out above, it is only supported on machines that have proper VT-d support in both their hardware and their BIOS. You can use the ESXi and Xen IOMMU compatibility lists to build a list of working hardware; anything that works with both ESXi and Xen will work with KVM. That said, I have to agree with the above that unless you have something that must run in a non-Proxmox kernel (such as a driver that only works in a 32-bit kernel, or of course the entire Windows kernel ;) ), containerization is definitely the better way to go here.
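
For anyone checking an existing box, a quick sanity check is to see whether the kernel actually initialises an IOMMU (a rough check only; depending on the kernel, intel_iommu=on or amd_iommu=on may need to be added to the boot line, and VT-d/AMD-Vi must be enabled in the BIOS):

    # look for VT-d (Intel) or AMD IOMMU initialisation messages
    dmesg | grep -i -e DMAR -e IOMMU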
 
