Sorry, I don't get it yet. ens18 is a physical connection to node 2, so node 3 can't communicate with node 1 via this link.
Am I overthinking this? Should I just set the first mesh interface as the failover link for the cluster?
edit: ah, in the end only the IP address matters for the cluster...
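For what it's worth, adding a redundant Corosync link at cluster creation can be sketched on the CLI like this — the cluster name, the management IPs, and the priorities are just placeholders; only the mesh address 10.15.15.50 comes from your post:

```
# hypothetical example: mesh IP added as a second corosync link
pvecm create mycluster --link0 10.10.10.50,priority=10 --link1 10.15.15.50,priority=20

# on each joining node (its own addresses, of course):
pvecm add 10.10.10.50 --link0 10.10.10.51 --link1 10.15.15.51
```

Corosync then fails over between link0 and link1 on its own; the lower priority value is preferred.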
Thanks for your quick reply!
I haven't created the cluster yet. Full mesh (at least in my case) means that there is one link to "node 2" and one link to "node 3", both with the same IP address, like this:
# Connected to Node2 (.51)
auto ens18
iface ens18 inet static
address 10.15.15.50...
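The matching stanza for the node-3 link would look much the same — the interface name ens19 and the /24 netmask are assumptions here; per the full-mesh setup, the address is the same:

```
# Connected to Node3 (.52) -- ens19 and the /24 are assumptions
auto ens19
iface ens19 inet static
        address 10.15.15.50/24
```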
Hi there,
I'm building a new 3-node PVE cluster. I already have a full mesh prepared for live migrations. Is it possible to use this full mesh as a failover network for the cluster itself?
In the WebUI at "Create Cluster" it doesn't seem to be possible to add both mesh links.
Thanks and greets!
Ok, here it is, the no-subscription repo. I think you can remove /etc/apt/sources.list.d/*, because it's not needed. That doesn't solve your problem, but it makes things clearer. Does apt update show anything unusual?
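For reference, a minimal /etc/apt/sources.list for PVE 7 with the no-subscription repo would look roughly like this (assuming bullseye, since you're on PVE 7):

```
deb http://ftp.debian.org/debian bullseye main contrib
deb http://security.debian.org/debian-security bullseye-security main contrib
deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription
```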
I can confirm that. With PVE/Ceph and a Proxmox Backup Server, the line between snapshot and backup blurs. Especially smaller VMs, on which not much changes, produce a backup in a few seconds.
ok, so the errors occurred while you tried to update to the current PVE 7. That's fine, but then your topic title is a bit misleading.
so far I can't see any no-subscription repo. To be sure, please also post the output of cat /etc/apt/sources.list.
can you also post a screenshot of the webGUI...
Welcome to the Proxmox community! :)
to be sure: you have an onboard NIC and added a PCI card with an additional NIC?
Can you login at the console? If yes, the output of ip addr would be interesting.
What exactly did you do that led to this output?
Did you follow these instructions?
what PVE repo do you use?
please post the output of
ls -al /etc/apt/sources.list.d/
cat /etc/apt/sources.list.d/*
yes, you can: https://pve.proxmox.com/wiki/Manual:_datacenter.cfg
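For example, a datacenter.cfg entry pinning migrations to the mesh network could look like this — the /24 subnet is an assumption based on the addresses in this thread:

```
# /etc/pve/datacenter.cfg -- mesh subnet assumed to be 10.15.15.0/24
migration: secure,network=10.15.15.0/24
```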
At the moment our live migrations use the switches and so the (cross-room) connections between the switches. These are 2x 10G and can easily be saturated by live migrations. That leads to higher latency between the switches, which...
I still don't get the point.
Let me try to clarify the network setup:
these are connections to our switches:
1x 10G management (PVE WebGUI)
1x 10G Corosync
2x 25G VM network (bonded)
and these are direct connections, no switches involved:
4x 25G for Ceph "bonded full mesh"
2x 25G VM Migration...