Hello,
When we introduced Proxmox to virtualize some routers, we noticed that the virtio NIC order in a QEMU guest can shift when removing a vNIC that is not the last one.
We accepted this behavior and defined a best practice: never remove a virtio vNIC.
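To illustrate the behavior, a minimal sketch (VMID 100 and the bridge names are hypothetical, not our actual setup):

    # three virtio vNICs on a hypothetical VM 100
    qm set 100 -net0 virtio,bridge=vmbr0
    qm set 100 -net1 virtio,bridge=vmbr1
    qm set 100 -net2 virtio,bridge=vmbr2

    # removing the middle vNIC...
    qm set 100 -delete net1

    # ...on the next boot the guest enumerates the two remaining NICs
    # in PCI order as eth0 and eth1, so the interface that used to be
    # eth2 comes back as eth1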
During SR-IOV experimentation we mixed hostpci and virtio NICs in the same QEMU guest.
To avoid arbitrary ordering, we decided to pin the hostpci guest interface names with udev rules.
It works as designed: virtio devices are labelled eth0, eth1, ... and hostpci devices eth50, eth51, ... (rules sketched below).
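The rules look roughly like this; a sketch only, where the PCI paths are examples read from lspci in one particular guest, not universal values:

    # /etc/udev/rules.d/70-persistent-net.rules inside the guest
    # (PCI paths below are examples from one guest)
    SUBSYSTEM=="net", ACTION=="add", KERNELS=="0000:00:10.0", NAME="eth50"
    SUBSYSTEM=="net", ACTION=="add", KERNELS=="0000:00:11.0", NAME="eth51"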
However, we noticed that hostpci[0..3] share bus 0 with the virtio NICs.
This mapping results in an arbitrary NIC order in the guest, depending on whether we add or remove virtio NICs while a hostpci[0..3] device exists.
The reason is that the PCI address is then not predictable, so matching with static udev rules like the ones above becomes arbitrary.
Again, to avoid this, we decided to never use hostpci[0..3].
This limits us to 12 hostpci vNICs per guest (hostpci4..15); not too bad, but we need a few more.
At this point I need to decide whether to accept this constraint or to re-map all hostpci devices to bus 2 by rewriting /usr/share/perl5/PVE/QemuServer/PCI.pm (the kind of edit I have in mind is sketched below).
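To make the question concrete, a rough sketch of that edit; the entry names follow the stock file, but the addr values are placeholders and would have to be checked against the real map to avoid slot conflicts:

    # excerpt of the static guest PCI layout in PVE/QemuServer/PCI.pm
    # (addr values are placeholders, check the installed file)
    my $pci_addr_map = {
        # ...
        hostpci0 => { bus => 0, addr => 16 },   # stock: first four on bus 0,
        hostpci1 => { bus => 0, addr => 17 },   # next to the virtio NICs
        # ...
    };

    # the idea would be to rewrite the hostpci0..3 entries to free slots
    # on bus 2 instead, e.g.
    #     hostpci0 => { bus => 2, addr => ... },
    # accepting that the edit is lost on every pve-qemu-server upgrade and
    # that existing guests see their PCI topology change once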
I would like to know why hostpci devices are split between bus 0 and bus 2?
And what problems might I encounter if I map all hostpci devices to bus 2? A performance issue?
Thank you for your expertise.
maxime