I recently upgraded one of my servers to a 10GbE SFP+ NIC. I essentially did what Udo suggests, with a small twist. I added the card first, then edited my /etc/network/interfaces file to bring the NIC up and test it out. I left the original network connection and IP address intact and added a second vmbr for the 10GbE NIC (rough config sketch below). In my VMs I then switched the network device from vmbr0 to vmbr1. This way my management interface stayed on the 1GbE link and all the VM traffic went over the 10GbE link, and I still had a second path to the server in case the 10GbE link went down for some reason.

I confirmed the VMs were getting the higher bandwidth with iperf tests (example below). Between VMs on the same VLAN on the same Proxmox host, I was getting crazy fast numbers (something like 20 Gbps, I don't remember exactly). Between VMs on different hosts but on the same VLAN, both hosts having 10GbE NICs, I was getting close to 9 Gbps (as hoped for), and between VMs on different VLANs I was getting around 2.2 Gbps. Both numbers make sense: traffic between VMs on the same host and VLAN never actually leaves the host, so it isn't limited by the physical NIC at all, while traffic between VLANs has to be routed through my pfSense box, which only has 2.5GbE NICs.
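
For anyone wanting to do the same, the relevant part of /etc/network/interfaces ends up looking roughly like this. The interface names (eno1 for the onboard 1GbE port, enp5s0f0 for the 10GbE SFP+ port), the addresses, and the VLAN-aware settings are just placeholders/assumptions; adjust them to your own hardware and network:

auto lo
iface lo inet loopback

iface eno1 inet manual

iface enp5s0f0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        # management stays here on the 1GbE port, untouched

auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp5s0f0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
        # VMs attach here and go out the 10GbE port
        # (give this bridge an address too if you want the host itself reachable over 10GbE)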
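
Moving an existing VM's NIC to the new bridge can be done in the GUI (VM -> Hardware -> Network Device -> edit the bridge), or from the CLI with qm set. A hypothetical example, where the VM ID, MAC, and VLAN tag are placeholders; keep the VM's existing MAC so the guest OS doesn't think it got a new NIC:

# check the current net0 line first
qm config 100 | grep net0
# then point it at vmbr1, keeping the same MAC (and VLAN tag if you use one)
qm set 100 --net0 virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr1,tag=10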
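
The bandwidth tests were just standard iperf runs between pairs of VMs, something like the following (the IP is an example; add -P 4 or so if a single stream won't fill the 10GbE pipe):

# on the "server" VM
iperf -s
# on the "client" VM
iperf -c 192.168.10.21 -t 30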