Hello, complete networking newbie here. I barely managed to figure out how to restore the network connection on a Proxmox box after adding a PCIe device. I'd greatly appreciate a little pointer.
I'm having trouble figuring out how to directly link 2 nodes so that ZFS replication and VM migration can happen faster and cause less congestion on the usual link for the other containers/VMs.
I'm guessing I should be looking at this:
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
But which setup should I use?
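From my reading of that wiki page, since I only have 2 nodes I think I could skip the full mesh and just cable the spare NICs together directly, giving each node a static address on a separate subnet. A rough sketch of what I'm imagining for node 1 (the interface name and the 10.10.10.x subnet are just my guesses):

Code:
auto enp5s0
iface enp5s0 inet static
        address 10.10.10.1/24

and 10.10.10.2/24 on the matching NIC of node 2. Does that sound right, or do I actually need one of the routed/broadcast setups from the wiki?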
Currently I have one machine with the following /etc/network/interfaces:
Code:
auto lo
iface lo inet loopback

iface enp4s0 inet manual

iface enp5s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.5.2/24
        gateway 192.168.5.1
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0

source /etc/network/interfaces.d/*
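If I did get a direct link working, am I right that I'd then point migration traffic at that subnet in /etc/pve/datacenter.cfg? Something like this (the subnet is my guess):

Code:
migration: secure,network=10.10.10.0/24

Or is there a better place to set that?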
The second machine is lower-powered/slower but also has 2 NICs. So I'm looking to utilise both NICs in both machines somehow.
Alternatively, what is the best use for the 2 NICs on those machines? The 5-port switch is a bog-standard cheap TL-SG105. What bonding options do I have to help speed up VM migration?
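Since the TL-SG105 is unmanaged (so no LACP as far as I can tell), I assume I'd be limited to a bonding mode that doesn't need switch support, e.g. balance-alb or active-backup. Something like this is what I have in mind (interface names are guesses, and I'm not sure balance-alb plays nicely under a bridge):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves enp4s0 enp5s0
        bond-miimon 100
        bond-mode balance-alb

auto vmbr0
iface vmbr0 inet static
        address 192.168.5.2/24
        gateway 192.168.5.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

Would that actually help a single migration, or does one TCP stream still max out at 1 Gbit regardless of the bond?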
(yes, for High Availability, I have set up a QDevice on a NAS)
Thanks very much