So maybe I do not understand something... I have two 10 Gbps NICs (they each have two ports):
NIC 1: Port 1 and Port 2
NIC 2: Port 1 and Port 2
I do not have the money to buy a 10 Gbps RJ-45 switch for these two nodes, so I thought I could use a standard Cat 6e cable between NIC 1 Port 1 and NIC 2...
Well, this is very odd... I removed the SDN config and rebuilt it. I did not change the underlying settings; they were all the same (vmbrs etc.). But now jumbo frames are working.
BTW, the MTU setting was 9000 across the board last time as well; I had only reverted it to get the link working from...
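For what it's worth, a quick way to confirm the MTU the kernel actually applied everywhere is to read it straight from sysfs. A small sketch (no interface names assumed):

```shell
# Print every interface and the MTU the kernel actually applied to it.
# For jumbo frames to work end to end, the NIC, the bridge, and any
# vxlan device on the path must all show the expected value.
for dev in /sys/class/net/*; do
    printf '%-12s %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done
```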
Hi, yes... there is no switch; this is direct NIC to NIC.
There does not seem to be any response from the ping:
root@pvenode01:~# ping 192.168.0.2
PING 192.168.0.2 (192.168.0.2) 56(84) bytes of data.
64 bytes from 192.168.0.2: icmp_seq=1 ttl=64 time=0.152 ms
64 bytes from 192.168.0.2: icmp_seq=2...
This is from Node 02:
6: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq master vmbr2 state UP group default qlen 1000
link/ether 3c:ec:ef:1b:c6:dc brd ff:ff:ff:ff:ff:ff
12: vmbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP group default qlen 1000...
I do not know why it worked either... but I was able to add it back and it's working; maybe something was not quite loaded, as you said.
BTW, does this only work with MTU 1500? I noticed that when I go to 9000 on the interfaces and bridge (and 8950 for the VXLAN zone and VMs), all hell breaks loose...
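For what it's worth, the 8950 figure comes from VXLAN's encapsulation overhead, and a don't-fragment ping is a quick way to check whether full-size frames really make it across the physical link. A sketch, using the peer address from this thread (adjust as needed):

```shell
# VXLAN wraps each inner Ethernet frame in outer IP + UDP + VXLAN headers:
# inner Ethernet (14) + outer IP (20) + outer UDP (8) + VXLAN (8) = 50 bytes,
# so a 9000-byte physical MTU leaves 8950 for the VXLAN zone and the VMs.
OVERHEAD=$((14 + 20 + 8 + 8))
echo "VXLAN inner MTU on a 9000 link: $((9000 - OVERHEAD))"

# To prove 9000-byte frames pass the physical link, ping with DF set.
# ICMP payload = 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
# ping -M do -s 8972 -c 3 192.168.0.2
```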
By removing the gateway from vmbr2 on both nodes it seemed to work... I can now ping VM to VM.
I want a dedicated link between the nodes (basically NIC to NIC, no switch in between), and as long as both use the same subnet there is no need for a gateway. But should it have worked with one set?
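For reference, a minimal sketch of what a point-to-point bridge stanza might look like in /etc/network/interfaces, using the interface names from this thread and example addresses. Since both ends sit on the same directly connected subnet, no gateway line is needed; a gateway on vmbr2 would also install a second default route alongside the one on the management bridge, which is a plausible reason it broke while set:

```
auto eno1
iface eno1 inet manual
        mtu 9000

auto vmbr2
iface vmbr2 inet static
        address 192.168.0.1/24   # example; the peer node would use 192.168.0.2/24
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 9000
        # no "gateway" line: the /24 is directly connected
```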
iface lo inet loopback
iface eno1 inet manual
I am trying to set up VXLAN using the SDN feature as outlined here:
My requirement is basically to use a 10G interface between the two nodes for better VM-to-VM transfer rates (NAS replica). Maybe there are better...
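If raw node-to-node throughput is the goal, it's worth measuring the link before and after tuning; iperf3 is the usual tool. A sketch, assuming the node names and addresses used elsewhere in this thread:

```shell
# On pvenode01 (server side):
#   iperf3 -s
# On pvenode02 (client side), four parallel streams:
#   iperf3 -c 192.168.0.1 -P 4
# A healthy direct 10G link typically reports roughly 9.4 Gbit/s.
```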
See the following [03:00.0 and 03:00.1]:
root@pvenode02:~# lspci -k| sed -n '/Ethernet/,/driver in use/p'
03:00.0 Ethernet controller: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 (rev 01)
Subsystem: Super Micro Computer Inc Ethernet Controller 10-Gigabit X540-AT2...
Thanks for the help... yes, maybe it would help to clarify.
I will be passing "dpool01/Media" to the TrueNAS VM as a large disk (VirtIO), and "dpool01/Security" to my ZoneMinder VM, again as a large disk. The others will probably go as iSCSI or, more likely, NFS. Only Media is going to TrueNAS. I like...
OK, here is what I'm planning for this... I will be giving the Media datasets to the TrueNAS VM; the rest can go to other VMs for other purposes. Any problems/suggestions? Please note: I have another zpool for the VMs' OS disks (NVMe, 1 x 1 TB mirror), so this pool is just for large data sets...
RAIDZ2 + RAIDZ2, not as a mirror but striped. I was experimenting with that to work out, when I want to add more drives, how best to add to an existing RAIDZ2. To grow the pool, basically, since as I understand it you cannot just add disks to a RAIDZ2 vdev.
So just practicing to see, if I do a RAIDZ2 with 8 drives, how best to...
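On growing the pool: striping a second RAIDZ2 vdev onto the existing pool is the standard approach, and `zpool add` is how it's done. A hedged sketch with hypothetical device names (dpool01 is the pool name from the earlier posts; note that `zpool add` is permanent, so double-check before running it):

```shell
# Hypothetical: stripe a second 4-disk RAIDZ2 vdev onto the existing pool.
# New writes are then distributed across both vdevs.
zpool add dpool01 raidz2 /dev/sde /dev/sdf /dev/sdg /dev/sdh

# The pool should now list two raidz2 vdevs:
zpool status dpool01
```

For balanced performance, the new vdev should ideally match the width and disk size of the existing one.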