Hey All,
New Proxmox cluster and doing some testing. I have dual 10G NICs set up as active/backup using a Linux bond. When I run iperf3 between the Proxmox nodes, TCP retransmissions are low or even zero.
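For reference, this is roughly what I'm running on the hosts (the bond name and addresses are just examples from my setup):

# confirm the bond mode and which slave is currently active (assuming the bond is bond0)
cat /proc/net/bonding/bond0

# on node A, start the iperf3 server
iperf3 -s

# on node B, run a 30-second TCP test; the "Retr" column is the retransmission count
iperf3 -c <node-a-ip> -t 30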
When I run iperf3 between VMs on different nodes, I consistently get high retransmissions, in the thousands. The VMs are Ubuntu 20.04 with virtio NICs and the QEMU guest agent installed in each of them. To rule out a VM issue, I put both VMs on the same node and ran iperf3 again: no retransmissions at all, which I assume is because that traffic is switched in memory and never goes out to the physical network.
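Inside the guests I'm doing essentially the same thing; a rough sketch (the guest NIC name ens18 is just an example, yours may differ):

# confirm the guest NIC is actually using the virtio driver
ethtool -i ens18

# on VM A
iperf3 -s

# on VM B (repeated with both VMs on the same node and on different nodes)
iperf3 -c <vm-a-ip> -t 30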
Why would iperf3 be clean when run from the nodes themselves but show retransmissions when run inside the VMs?
FYI - I've tested this across both bonds (each a 10G pair set up as active/passive). I also checked the physical interface stats for errors, drops, etc., and I'm not seeing anything on the physical node itself.
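Roughly how I checked the counters on the node (interface names eno1/eno2 and bond0 are examples):

# kernel RX/TX error and drop counters for the physical NICs and the bond
ip -s link show eno1
ip -s link show eno2
ip -s link show bond0

# per-driver NIC statistics; look for anything like rx_missed, rx_dropped or discards
ethtool -S eno1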
What else can I do to test and chase down this VM networking performance issue? The VM-to-VM test still gets about 9 Gb/s of throughput, but the retransmissions are high.