Question about communication between host and vms

m3a2r1
Feb 23, 2020
I've installed FreeNAS as a VM, and it holds the disks for all the VMs on that host. There are also 4 Windows Server VMs on that node.
Bridge vmbr0 is attached to the eth interface; I also have 2 SFP interfaces.
My question is: how should I set up vmbr0 for the fastest communication between VMs? Will all machines read their disks over the slower eth connection if vmbr0 is bound to eth? Should I bind it to an SFP port, and the other VMs too? Please explain how this works.
 
Do I understand you correctly: you have one big FreeNAS VM which gets all the storage via PCI passthrough of the controller? The FreeNAS VM then exports the storage, and you use that as the main storage for the other VMs on the same PVE node?

In this case, vmbr0 acts just as a switch for the internal network between the PVE node and the FreeNAS VM. It does not matter how fast the physical NIC used as the "bridge port" is, because that traffic never leaves the bridge itself. The limiting factor will be how much data the CPU can push through.
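To make the "bridge port" point concrete, here is a minimal sketch of what such a bridge looks like in /etc/network/interfaces on a PVE node. The interface name eth0 and the addresses are assumptions, not taken from your setup:

```
# vmbr0: Linux bridge; VMs and the host attach to it.
# VM-to-VM (and VM-to-host) traffic is switched in software
# and never touches eth0, the physical "bridge port".
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24      # example host address
        gateway 192.168.1.1          # example gateway
        bridge-ports eth0            # only traffic leaving the node uses this NIC
        bridge-stp off
        bridge-fd 0
```

So the NFS traffic between the Windows VMs and the FreeNAS VM flows entirely inside this software bridge; eth0's link speed only matters for traffic to other hosts.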

I personally would avoid such complicated setups, as they cost performance and create interesting dependency chains ;)
 
Exactly, it's a FreeNAS VM with NFS, and the other machines have their disks on that NFS share. So if I understood correctly, I should put all VMs on one bridge. On a standalone node the eth bridge will be sufficient, and if I later cluster additional nodes, that bridge should sit on the SFP NIC for fast inter-node communication.
Am I right?
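If you do go the clustered route, one common pattern is to keep vmbr0 on the eth NIC for general VM traffic and add a second bridge on an SFP port dedicated to storage/inter-node traffic. A sketch, again with assumed interface names (enp1s0f0 as the first SFP port) and example addresses:

```
# vmbr1: separate bridge on a 10G SFP port, dedicated to
# storage / inter-node traffic in a clustered setup.
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24        # example storage-network address
        bridge-ports enp1s0f0        # assumed SFP interface name
        bridge-stp off
        bridge-fd 0
```

VMs that need fast access to the NFS share from another node would then get a second virtual NIC attached to vmbr1, while everything else stays on vmbr0.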