Hi, this looks like a problem I've encountered before ...
Try to switch the configuration between en1 and en2.
en1 will have 192.168.0.50/16 and en2 will have your bridge
The first NIC should be the one used to operate the cluster.
This solved the same problem for me in the past.
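As a rough sketch, that swap could look like this in /etc/network/interfaces (en1/en2 are from your post; vmbr0 and the bridge address/gateway are just placeholders for whatever your bridge uses):

    auto en1
    iface en1 inet static
        address 192.168.0.50/16

    auto en2
    iface en2 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address <bridge IP/prefix>
        gateway <gateway>
        bridge-ports en2
        bridge-stp off
        bridge-fd 0

Then reboot (or run ifreload -a if you have ifupdown2) and check that corosync is now using en1.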
Regards
On the switches I know, when you get 2 up/down indications, one is for the "electrical" link and the other for the "logical" one.
So maybe the "logical" one (which often means the network protocol, so 802.3ad in your case) is down.
Can you ping through the bond?
Are you sure this bond runs in LACP mode (802.3ad), or is it a simple static LAG, round robin, or something else?
I don't see any indication of this (but I don't know this kind of switch) ...
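For comparison, an 802.3ad bond on the Proxmox side usually looks something like this in /etc/network/interfaces (interface names and the address are placeholders; the switch ports must be in a matching LACP group):

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer2+3

    auto vmbr0
    iface vmbr0 inet static
        address <host IP/prefix>
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

If the switch side is a static LAG instead, the bond mode has to match (e.g. balance-xor), otherwise the two ends won't agree.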
I think what you want is possible, but you'll have to set up your public IP block on a VLAN on the vRack network card; I'm not sure it can run on the WAN network card.
https://docs.ovh.com/us/en/dedicated/ip-block-vrack/
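On the Proxmox side that could look roughly like this, assuming eth1 is the vRack NIC, 10 is the VLAN you attach the IP block to in the OVH panel, and vmbr1 is the guest bridge (all placeholders, see the guide above for the vRack side):

    auto eth1
    iface eth1 inet manual

    auto eth1.10
    iface eth1.10 inet manual

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports eth1.10
        bridge-stp off
        bridge-fd 0

The addresses from the public block are then configured inside the VMs attached to vmbr1, not on the host.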
Hello
We're planning to set up a new cluster with nearly the same setup (3 main node servers and 1 backup server).
2 x 10 Gb/s for connecting the Proxmox hosts to the shared storage (iSCSI multipath for us), with no VLAN (dedicated to the SAN), see the sketch below
1 x 1 Gb/s card on each Proxmox node for admin purposes
3 x 1 Gb/s...
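For the two 10 Gb/s SAN links, the usual sketch is one subnet per NIC so iSCSI multipath gets two independent paths (interface names, subnets and the jumbo-frame MTU are just examples):

    auto ens1f0
    iface ens1f0 inet static
        address 10.10.1.11/24
        mtu 9000

    auto ens1f1
    iface ens1f1 inet static
        address 10.10.2.11/24
        mtu 9000

Each storage controller then presents a portal in both subnets and multipathd sees two paths per LUN.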
Hello
We have the same configuration (2 servers with vRack at OVH).
eth0 -> vmbr0 -> WAN / eth1 -> vmbr1 -> vRack
In my opinion, VLANs are only available on the vRack interface, not on the WAN interface.
Moreover, the vRack is not easy to set up ...
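Roughly, that mapping looks like this in /etc/network/interfaces (addresses are placeholders; bridge-vlan-aware on vmbr1 is only needed if the guests tag vRack VLANs themselves):

    auto vmbr0
    iface vmbr0 inet static
        address <public IP/prefix>
        gateway <OVH gateway>
        bridge-ports eth0
        bridge-stp off
        bridge-fd 0

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports eth1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094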