Interest in VPP (Vector Packet Processing) as a dataplane option for Proxmox

ryosuke.nakayama

New Member
Mar 10, 2026
Hi everyone,

I'm a Proxmox user running it in a home lab and research environment, and I'd like to share an idea and gauge the community's interest.

I've been experimenting with FD.io VPP (Vector Packet Processing) on a Proxmox host and wanted to ask whether there's any interest — from the development team or the community — in deeper integration of VPP as a dataplane option.

Background & Motivation

I recently installed VPP 26.02 on Proxmox (Debian trixie) and confirmed basic operation. VPP offers a rich set of plugins (SRv6, PPPoE, NAT, VXLAN, WireGuard, and more) and delivers high throughput via its DPDK backend. I see potential for VPP to complement or replace the existing network dataplane (Linux Bridge / OVS) in certain use cases.
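For anyone who wants to reproduce this, a minimal sketch of the sanity checks I mean, assuming VPP was installed from the FD.io Debian packages (plugin names vary by build):

```shell
# Confirm the VPP service is up and which plugins the build ships with.
systemctl status vpp                 # dataplane service running?
vppctl show version                  # confirm the installed release
vppctl show plugins | grep -Ei 'srv6|pppoe|nat|vxlan|wireguard'
```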

Some use cases I'm interested in: high-performance East-West forwarding between VMs and containers, traffic steering via SRv6, PPPoE Access Concentrator termination at line rate, and high-speed NAT for edge/ISP-like setups.
 
@spirit Right, that's the key blocker. QEMU supports vhost-user natively, but qemu-server doesn't expose it.

I took a look at the qemu-server source on git.proxmox.com and the change looks fairly well scoped: mainly adding vhost-user as a netdev backend option and generating the right QEMU arguments (-chardev socket plus -netdev vhost-user).
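For reference, a rough sketch of the QEMU arguments qemu-server would need to emit (socket path, IDs, and MAC are made up for illustration). Note that vhost-user requires the guest RAM to come from a shared memory backend, so the memory-backend object is part of the change too:

```shell
# Illustrative invocation: attach a VM NIC to a VPP vhost-user socket.
# Guest memory must be file-backed and shared for vhost-user to work.
qemu-system-x86_64 \
  -m 4096 \
  -object memory-backend-file,id=mem0,size=4096M,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -chardev socket,id=chr0,path=/var/run/vpp/vm100-net0.sock,server=on \
  -netdev type=vhost-user,id=net0,chardev=chr0 \
  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56
```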

I'm willing to write a patch for this. But before spending time on it, I'd love to hear from the dev team: would vhost-user-net support be something you'd consider merging? Any design guidelines I should follow, especially around migration and HA?

My use case: VPP on the host handling SRv6 steering and NAT at near line rate, with VMs staying on virtio-net.
 
Maybe the best way is to ask on the dev mailing list pve-devel@lists.proxmox.com. (I'm pretty sure some users would be interested, for router VM appliances.)

the basic dev doc for patch submission is here:
https://pve.proxmox.com/wiki/Developer_Documentation


Does it expose a vswitch? If yes, maybe detect the vswitch type and auto-enable vhost-user.
HA should work, no problem.
For live migration, if it's not supported, a check should be added in the code.
Maybe also block enabling the firewall on the VM NIC: as it's the kernel firewall, it won't work.
 
I'm curious why it would be preferred over Open vSwitch + DPDK.

In my case, I use mostly EVPN/VXLAN that requires host terminated tunnels, but there's always the need for network functions that just go with VLAN bridging to the physical network that might benefit with raw performance.
 
As far as I understand the architecture, one part is processing each packet through the entire pipeline individually versus handling batches of packets (aka vectors) at each step of the pipeline. The latter saves time loading the CPU instructions for each step, since that has to be done only once per vector rather than once per packet.
The other part is flexibility in the packet-processing pipeline. OVS can handle only what is written in its code base; VPP allows adding additional handlers as plugins without changing the source project.
 
Thanks for the feedback and the pointer to pve-devel.

To your questions: yes, VPP exposes a vswitch (L2 bridge-domain) with vhost-user sockets, so auto-detection should be doable. For live migration, I'll add a check to block it when vhost-user is in use unless the target host is ready. And agreed on the firewall: I'll make sure firewall=1 is rejected on vhost-user NICs, since kernel-based filtering won't work.

I'll put together an RFC and send it to the mailing list. Thanks again for the guidance.
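In case it helps anyone experimenting with this before qemu-server support lands, here is a sketch of the VPP side of that bridge-domain setup. Socket path, bridge-domain ID, and interface names are illustrative and will differ per setup; check the exact CLI syntax against your VPP release:

```shell
# Create a vhost-user port (VPP as the server side of the socket)
# and attach it to an L2 bridge-domain together with the uplink.
vppctl create vhost-user socket /var/run/vpp/vm100-net0.sock server
vppctl set interface state VirtualEthernet0/0/0 up
vppctl create bridge-domain 10
vppctl set interface l2 bridge VirtualEthernet0/0/0 10
# Add the physical uplink (here a DPDK-bound NIC) to the same domain.
vppctl set interface l2 bridge TenGigabitEthernet2/0/0 10
vppctl set interface state TenGigabitEthernet2/0/0 up
```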