I would recommend running Ceph on its own NICs, at least 40Gb, and NVMe if you are IO heavy, so that you have enough bandwidth for the inter-OSD traffic: for every 1GB of client writes there is roughly 2 x 1GB of extra traffic generated on the cluster network for the additional replicas to be made.
In my current role, for production sizing of Ceph for customers, I recommend 6 nodes with an EC profile of 4+2, which is still 66% usable (4 data chunks out of 6 total); you could run 3+2 with 5 servers at 60% usable.
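Just to illustrate what a 4+2 setup looks like in practice, here is a rough sketch of creating the EC profile and pool (profile name, pool name and PG count are placeholders, adjust to your environment):

# create an erasure-code profile with 4 data + 2 coding chunks, one chunk per host
ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
# create a pool using that profile (PG count is just an example)
ceph osd pool create ec-pool 128 128 erasure ec-4-2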
If you could add 2 servers you would get the resilience you need...
I started at my last employer with 3 nodes, then expanded to 7 once we were past the 12 month POC stage.
It works fine, but you can only ever have 1 server out; taking 2 out will cause Ceph to take all of the EC pool's placement groups offline.
I...
This is fixable with persistent network naming based on MAC addresses, which works with any Linux distribution using systemd:
add "net.ifname-policy=mac" to /etc/default/grub on GRUB_CMDLINE_LINUX line
update-grub
systemctl enable...
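For reference, a minimal sketch of the grub side of it (append the parameter to whatever options are already on that line rather than replacing them):

# /etc/default/grub
GRUB_CMDLINE_LINUX="... net.ifname-policy=mac"
# then regenerate the grub config and reboot so the new kernel command line takes effect
update-grub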
Use sysctl to disable IPv6 on the interfaces you don't want it on.
You will find all the values under /proc/sys/net/ipv6/conf;
the sysctl key is net.ipv6.conf.<interface>.disable_ipv6 = 1
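For example, a minimal drop-in file would look like this (eno1 is just a placeholder interface name, swap in your own):

# /etc/sysctl.d/90-disable-ipv6.conf
net.ipv6.conf.eno1.disable_ipv6 = 1
# apply it without a reboot and check the result
sysctl --system
cat /proc/sys/net/ipv6/conf/eno1/disable_ipv6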
hi all
Just looking for information about the SDN and the interfaces it creates.
I have created a VLAN SDN on bond0, but it has created 4 interfaces instead of the expected 2:
13: ln_example@pr_example: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500...