network virtio and hostpci bus mapping in qemu guest

auranext

Well-Known Member
Jun 5, 2018
54
2
48
123
Hello,

When we introduced Proxmox to virtualize some routers, we noticed that the virtio NIC order inside the QEMU guest can shift when a vNIC that is not the last one is removed.
We accepted this behaviour and defined a best practice: never remove virtio vNICs.
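To make the shift concrete, here is a sketch (the MACs and bridges below are just placeholders): with three virtio vNICs defined, the guest enumerates them in PCI order; once the middle one is removed, the guest renumbers what it sees.

net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0   ->  guest eth0
net1: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr1   ->  guest eth1
net2: virtio=AA:BB:CC:DD:EE:03,bridge=vmbr2   ->  guest eth2

# after removing net1, the guest renumbers the remaining NICs:
net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0   ->  guest eth0
net2: virtio=AA:BB:CC:DD:EE:03,bridge=vmbr2   ->  guest eth1   (was eth2)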
During SR-IOV experimentation we mixed hostpci and virtio NICs in the same QEMU guest.
To avoid arbitrary ordering, we decided to pin the hostpci guest interfaces with udev rules.
This works as designed: virtio devices are labelled eth0, eth1, ... and hostpci devices eth50, eth51, ...
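For reference, the rules look roughly like this (the guest PCI paths are placeholders for our setup; matching on ATTR{address} instead of KERNELS would work the same way):

# /etc/udev/rules.d/70-passthrough-nics.rules (sketch)
SUBSYSTEM=="net", ACTION=="add", KERNELS=="0000:00:10.0", NAME="eth50"
SUBSYSTEM=="net", ACTION=="add", KERNELS=="0000:00:11.0", NAME="eth51"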
However, we noticed that hostpci[0..3] share bus 0 with the virtio NICs.
With this mapping, the NIC order in the guest becomes arbitrary depending on whether we add or remove virtio NICs while a hostpci[0..3] device exists: the guest PCI address of the passed-through device is no longer predictable, so the static udev rules match arbitrarily.
Again, to avoid this we decided to never use hostpci[0..3].
That limits us to 12 hostpci vNICs per guest, which is not too bad, but we need a few more.
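In practice the workaround looks like this in the VM config (host PCI IDs and MACs are placeholders):

# hostpci0..hostpci3 left unused so no passthrough device lands on bus 0
hostpci4: 81:10.0
hostpci5: 81:10.1
...
hostpci15: 81:11.3
net0: virtio=AA:BB:CC:DD:EE:01,bridge=vmbr0
net1: virtio=AA:BB:CC:DD:EE:02,bridge=vmbr1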
At this point I need to decide whether we accept this constraint, or whether we re-map all hostpci devices to bus 2 by patching /usr/share/perl5/PVE/QemuServer/PCI.pm.
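For what it is worth, this is how I check where each device currently lands (100 is a placeholder VM id):

qm showcmd 100 | tr ' ' '\n' | grep -E 'vfio-pci|virtio-net'

Each vfio-pci / virtio-net-pci entry shows its bus= and addr=, which is how we noticed the bus 0 sharing in the first place.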

I would like to know why hostpci devices are split between bus 0 and bus 2,
and what problems I might run into if I map all hostpci devices to bus 2. Performance issues?

Thank you for your expertise.

maxime
 