These are 3x OVH dedicated servers, directly connected to the internet with the first NIC and to their vRack/VLAN with the second NIC.
The second bridge is used to let the VMs communicate and reach the other physical servers I run in the vRack/VLAN (via masquerade).
I never used BIRD tbh... :)
thank you soooo much! :)
it worked with:
ip addr add 10.240.10.1/32 dev lo
unfortunately I'm unable to set it correctly in the network/interfaces file, but it isn't a big issue; I'm going to set it at startup.
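For anyone hitting the same snag: one way to persist that `/32` under Debian-style ifupdown is a `post-up` hook on the loopback stanza. This is just a sketch, assuming the stock /etc/network/interfaces layout shown elsewhere in this thread:

```
auto lo
iface lo inet loopback
        # re-add the shared gateway /32 at boot (address from this thread's setup)
        post-up ip addr add 10.240.10.1/32 dev lo || true
```

The `|| true` keeps ifup from failing if the address is already present.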
hi, I like your idea, could you please explain a bit more? By "lo" do you mean the loopback device? How could the VMs reach such an IP if it's local only to the hosts?
this is my current interfaces file from one of my hosts:
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet...
Hi, thank you for your reply.
Please have a look at the diagram I added to the main thread. Basically, when I need to migrate a VM from one host to another, I also need to change its (guest) default gateway manually. I don't have a common router.
Hi, I'm currently testing PVE features on a 3x node cluster.
I have some test VMs which I'm able to migrate from host to host easily.
My problem is: I'm using the host's IP as the guest OS's default gateway for outgoing internet traffic. How do I inform the guest / switch the gateway in case of a migration?
Do...
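A common approach for this (a sketch, using the 10.240.10.x addressing from this thread; the exact address is an assumption) is to assign the *same* gateway /32 on the loopback of every node, so the guest's default gateway stays valid no matter which host it lands on:

```
# on EVERY node (same address everywhere; hypothetical, from the vrack range here):
ip addr add 10.240.10.1/32 dev lo

# inside the guest, the gateway is then host-independent:
#   default via 10.240.10.1
```

This works because Linux, by default (arp_ignore=0), answers ARP requests on the bridge for any locally-assigned address, including one on lo, so whichever host currently runs the VM replies as the gateway.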
I finally spotted the problem... Quick note for anyone who finds this thread useful in the future: it was the wrong interface... :confused:
TL;DR
I run a small lab cluster; each node is a bare-metal OVH server. They are equipped with two network interfaces: one is connected to...
Trying to limit the issue to one thing at a time, I'm currently working on port 22 only (SSH macro). If I activate it (IN, ACCEPT, source: my PVE cluster IPSet), no SSH connections can be made between hosts (attempted via the ssh command directly)... but if I SSH to their public endpoints, it works...
Hi, yes, it was turned on.
I think I have partially solved my problem by adding ACCEPT rules in both the IN and OUT directions for the Ceph macro.
Now Ceph stays up-green.
Anyway, I'm unable to move data between hosts; for instance, if I try to migrate a VM from node to node, the task begins but stays there forever...
I honestly don't even know how to enable/disable macros....
Yes, IPSet includes 3x cluster network ips and 3x ceph network ips; double checked it.
I do have 2x bridge interfaces, though: vmbr0 connected to the public network and vmbr1 bridged to the 2nd NIC.
Hi, thank you, I didn't know they were draggable :-)
Now I'm just using default DROP on INPUT with two rules:
1- allow 8006 (to keep managing)
2- allow for CEPH macro and source/dest set with the ipset containing all the nodes
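In `cluster.fw` terms, that setup would look roughly like this. A sketch only, assuming an IPSet named `pve-nodes` holding the cluster and Ceph IPs (the name and addresses are placeholders, not my exact file):

```
# /etc/pve/firewall/cluster.fw (sketch)
[OPTIONS]
enable: 1
policy_in: DROP

[IPSET pve-nodes]          # hypothetical name
10.240.10.1
10.240.10.2
10.240.10.3
10.240.99.1
10.240.99.2
10.240.99.3

[RULES]
IN ACCEPT -p tcp -dport 8006        # keep the web UI reachable
IN Ceph(ACCEPT) -source +pve-nodes  # Ceph macro, cluster-only
OUT Ceph(ACCEPT) -dest +pve-nodes
```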
Anyways, when I'm going to activate the firewall ceph section...
Hi, I have the following configuration running on a 3x OVH server cluster:
vrack vlans:
10.240.10.x -> VLAN for server-to-server communication and VMs
10.240.99.x -> VLAN for Ceph
I want to close the SSH ports on all nodes from the outside, while keeping the nodes able to talk to each other and keep everything in sync...
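For the SSH part specifically, the built-in SSH macro plus an IPSet can express "cluster-only SSH". Again a sketch; `cluster-nodes` is a hypothetical IPSet name and the addresses follow the VLANs above:

```
[IPSET cluster-nodes]   # one entry per node
10.240.10.1
10.240.10.2
10.240.10.3

[RULES]
IN SSH(ACCEPT) -source +cluster-nodes   # allow SSH only from cluster peers
```

With the input policy at DROP, SSH from any other source is then silently blocked.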