A full mesh (node1->node2 & node1->node3 direct links) and a dual 10G LACP bond give, in theory, the same throughput.
You can move PVE management to the ens19/ens20 bond and use the 2x 1 Gbps links, unbonded, as dedicated corosync links.
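A minimal sketch of that layout in /etc/network/interfaces; the 1 Gbps NIC names (eno1/eno2) and all addresses are assumptions, adjust them to your hardware:

```
# bond of ens19 + ens20 carries PVE management
auto bond0
iface bond0 inet manual
    bond-slaves ens19 ens20
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# unbonded 1 Gbps links, one per corosync ring (link0/link1)
auto eno1
iface eno1 inet static
    address 10.10.1.11/24

auto eno2
iface eno2 inet static
    address 10.10.2.11/24
```

Two separate subnets let corosync use them as two independent links, so losing one switch does not take down the cluster quorum traffic.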
You can use a full mesh with 3 nodes. But when you want to add more nodes in the future, it will limit you.
Hiding the default port behind a different number is just security by obscurity.
If I remember correctly, you need the root password for a cluster join (it's been a long time since I created a new cluster).
Do you think Ceph = ZFS?
https://pve.proxmox.com/wiki/Performance_Tweaks#Disk_Cache
MTU 9000 doesn't have as big an effect as you think (and brings some problems of its own).
Where are your performance tests? What is your Ceph config?
Are the old and new subnets on the same VLAN or different ones? The main rule is that when public_network lists multiple subnets, they need to be able to communicate with each other.
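During the migration you can list both subnets in ceph.conf; the subnets below are placeholders, not your real ones:

```
[global]
    # old and new subnet, comma-separated, for the transition period
    public_network = 10.0.10.0/24, 10.0.20.0/24
```

Once all monitors and clients are reachable on the new subnet, the old one can be dropped from the list.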
You need to reconfigure the monitors, so it's important to read the documentation (and in some cases, to test such a migration first).
Two points:
1a] in one-bond mode, if the bond fails, you lose all connectivity
1b] in two-bond mode, if one bond fails, you still have something left (usability depends...)
2a] in one-bond mode, the bond sides select which interface is used to send a data stream
2b] in two-bond mode, the admin decides which interfaces...
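Point 2b in config terms: with two bonds the admin pins each traffic class to its own bond and subnet. A sketch for /etc/network/interfaces; NIC names and addresses are assumptions:

```
# bond0: management/VM traffic
auto bond0
iface bond0 inet manual
    bond-slaves ens19 ens20
    bond-mode active-backup

# bond1: storage traffic, pinned by the admin to its own subnet
auto bond1
iface bond1 inet static
    address 10.10.3.11/24
    bond-slaves ens21 ens22
    bond-mode active-backup
```

With a single bond, which physical link carries a given stream is decided by the bond's hash policy, not by you.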
Nothing specific for now. I can only suggest this:
1] monitor system values
2] regular fast checks of NFS availability
3] regular fast checks of data disk availability
4] regular fast checks of RAID controller / RAID availability
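A hypothetical sketch of the "fast check" idea for points 2 and 3: verify a path answers quickly, with a timeout so a hung NFS mount can't block the check. The paths and the 5-second timeout are assumptions; a RAID controller check (point 4) would instead call your vendor's CLI tool, which I won't guess at here.

```shell
#!/bin/sh
# Quick availability check: a hung NFS mount makes stat block forever,
# so wrap it in a timeout and report OK/FAIL per path.

check_path() {
    p="$1"
    # 5s timeout is an assumption; tune it for your environment
    if timeout 5 stat "$p" >/dev/null 2>&1; then
        echo "OK   $p"
    else
        echo "FAIL $p"
        return 1
    fi
}

check_path /          # example: data disk mount point
```

Run it from cron every minute or two and alert on any FAIL line.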
For now it looks like a problem not with performance, but with size. And because...