Thanks for the reply. I guess I was just hoping this VLAN setup was enough to separate Ceph from Corosync; seems like that was wishful thinking. I'll be pushing to get this changed and give them each their own separate physical NICs.
This seems pretty clever, thanks for the tip. It's also making me look at the bond mode of the current NICs, which is 802.3ad, so again I'd say less than ideal.
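For anyone following along, I confirmed the active mode by reading the bond state from the kernel (bond0 is just what ours happens to be called, adjust for your setup):

cat /proc/net/bonding/bond0 | grep "Bonding Mode"
# Bonding Mode: IEEE 802.3ad Dynamic link aggregation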
No, we do not. The only NICs on the box are the two 10G ports, and they're being bonded. There is actually a third VLAN on the bonded interface as well, but it's for creating an internal VM network.
In our 3-node cluster we currently have 2x 10G ports that are bonded, and then we set separate VLANs on the bonded NIC for Ceph storage traffic (say VLAN 1) and Proxmox VM traffic (say VLAN 2). So we have bond0.1 and bond0.2 traffic, roughly like the sketch below.
I was under the impression this was not ideal as the bonded...
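For context, the relevant /etc/network/interfaces stanzas look roughly like this. NIC names and the storage address are placeholders, and the VLAN tags follow the example numbers above rather than our real ones:

auto bond0
iface bond0 inet manual
    # the two 10G ports (placeholder names)
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3
    bond-miimon 100

auto bond0.1
iface bond0.1 inet static
    # example Ceph storage network address
    address 10.10.1.11/24
    vlan-raw-device bond0

auto vmbr0
iface vmbr0 inet manual
    # VM traffic VLAN, bridged for the guests
    bridge-ports bond0.2
    bridge-stp off
    bridge-fd 0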
Is the WAN bridge, or the NIC associated with the bridge, configured to be used anywhere else? I'm running pfSense inside a VM and have never run into this.
Does the Proxmox box itself get a WAN IP?
I was recently hired, and I don't think this is a perfect Proxmox setup; I would like to improve it.
Currently, our Proxmox cluster has its 3 nodes assigned public IP addresses that we use to connect to them. Fairly randomly, we lose the HTTP connection to one or more nodes and then just connect to one of the...