Hi,
On my Proxmox server (= host) I set up a ZFS SMB share. It works quite well and fast (around 150-200 MB/s for the guests).
But now I installed a dual-port NIC in my server and created vmbr1 (= enp5s0) and vmbr2 (= enp6s0); neither is assigned to any guest at the moment. The problem is that every guest (still using vmbr0 as its NIC) now connects through the external switch from eth1 (= vmbr0) to vmbr1 and vmbr2, so my speed drops from around 170 MB/s down to 25-30 MB/s for any internal traffic. How can I force internal traffic to stay on the internal "switch", i.e. the virtual 10GbE vmbr0?
sudo route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         192.168.0.1     0.0.0.0         UG    0      0        0 vmbr0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr0
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr2
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 vmbr3
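As I understand it, the three overlapping 192.168.0.0/24 routes above mean every bridge carries an address in the same subnet, so the kernel can pick any of them for local traffic. For reference, here is a sketch of how I think /etc/network/interfaces would have to look so that only vmbr0 owns an address in 192.168.0.0/24 (the host address 192.168.0.10 is made up; the bridge ports are taken from above):

# /etc/network/interfaces -- sketch only; 192.168.0.10 is an assumed host address
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.10/24
        gateway 192.168.0.1
        bridge-ports eth1
        bridge-stp off
        bridge-fd 0

# vmbr1 and vmbr2 come up without an IP in 192.168.0.0/24, so the kernel
# keeps only one route into the subnet (via vmbr0)
auto vmbr1
iface vmbr1 inet manual
        bridge-ports enp5s0
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
        bridge-ports enp6s0
        bridge-stp off
        bridge-fd 0

Afterwards "ip route get 192.168.0.50" (any guest address) should report vmbr0 as the outgoing device. Is that the right approach, or is there a better way?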
				