With networking you can test part by part. :)
Get eth0 working first, then try to get eth0:0 working with one IP. If that works fully (ping, traceroute, ...), you can try bridging or adding eth0:1 and work your way further.
Good luck :)
I can't help much further; I tried explaining it the way I would do it.
You can also send an email to Hetzner to get a working example. I am sure they know their infrastructure a lot better than I do.
Configure your eth0 with the 5.x.x.205 IP => this worked in the beginning.
Bridge eth0:0 with vmbr0.
This can't be done through the Proxmox GUI; it must be done in the network interfaces file, as the Proxmox GUI doesn't handle the alias (eth0:0).
Give the bridge the 5.x.x.220 address.
Try pinging it...
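A sketch of what those steps could look like in /etc/network/interfaces — the netmasks, the 5.x.x.193 gateway, and the routed-bridge layout (bridge_ports none) are assumptions for illustration, not the poster's actual values:

```
# /etc/network/interfaces -- sketch only; addresses and gateway are placeholders
auto eth0
iface eth0 inet static
    address 5.x.x.205
    netmask 255.255.255.255
    pointopoint 5.x.x.193    # host route to the gateway (assumed Hetzner-style)
    gateway 5.x.x.193

# routed bridge carrying the additional IP
auto vmbr0
iface vmbr0 inet static
    address 5.x.x.220
    netmask 255.255.255.255
    bridge_ports none
    bridge_stp off
    bridge_fd 0
```

After `ifreload -a` (or a reboot), pinging 5.x.x.220 from outside is the first sanity check.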
@ghusson: for interconnecting the servers and creating an overlay network I use tinc. It works like a charm!
I tried OVS and plain Linux networking with GRE tunnels, but it wasn't great, and Proxmox 4.4 can't handle OSPF due to a kernel panic, so no failover.
I implemented tinc...
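For reference, a minimal tinc setup along these lines might look as follows — the netname "vpn0", node names, and addresses are assumptions, not the poster's configuration:

```
# /etc/tinc/vpn0/tinc.conf on node1 -- sketch only
Name = node1
Mode = switch          # layer-2 mode, so the tinc interface can be bridged
ConnectTo = node2
```

```
# /etc/tinc/vpn0/hosts/node1 -- exchanged with the other nodes
Address = 5.x.x.205    # public IP of node1 (placeholder)
# followed by this node's public key, generated with: tincd -n vpn0 -K
```

In switch mode the resulting tinc interface can be added as a port to a Proxmox bridge, giving VMs on different nodes a shared layer-2 segment.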
If you bridge your eth0, you normally give vmbr0 an IP and not eth0 => giving eth0 an IP as well can lead to strange network behaviour.
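In other words, when eth0 is enslaved to the bridge, only the bridge carries the address — a sketch with placeholder values:

```
# eth0 enslaved to vmbr0: no IP on eth0, only on the bridge
auto eth0
iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 5.x.x.205        # placeholder address
    netmask 255.255.255.224
    gateway 5.x.x.193        # assumed gateway
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
```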
For the Proxmox node:
I noticed you added the extra IPs with a /27 subnet (255.255.255.224). I think you should add your extra addresses with a /32 (255.255.255.255) subnet...
I pulled this information from the Hetzner documentation:
https://wiki.hetzner.de/index.php/Zusaetzliche_IP-Adressen/en
"Individual addresses
The assigned addresses can be configured as additional addresses on the network interface. To ensure the IP addresses are configured after a restart, the..."
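A sketch of a single extra address added as a /32 alias, in the style the Hetzner page describes — the address is a placeholder:

```
# extra IP as a /32 alias on the main interface (placeholder address)
auto eth0:0
iface eth0:0 inet static
    address 5.x.x.220
    netmask 255.255.255.255
```

With /32 the host doesn't assume any neighbours share the subnet, which matches how Hetzner routes additional single IPs to the server.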
I configured an HA cluster and also thought about pfSense, as I was already familiar with it. But the moment you want an overlay network for your VMs so they can talk to each other, you need a pfSense on each Proxmox node with a virtual IP (=> OVS doesn't work well with multicast), or the pfSense must...
Good to hear this works on Proxmox.
I already got my setup working with ucarp, because I needed to give KVM instances a public IP with a shared gateway across the Proxmox nodes. Maybe in the future I will tackle keepalived or plain CARP for balancing.
Thanks for the information.
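A shared gateway with ucarp can be sketched roughly as below; the interface, VHID, password, and all addresses are assumptions for illustration, not the actual setup:

```
# Sketch: announce a shared virtual IP with ucarp (all values are placeholders).
# --srcip is this node's real IP; --vhid and --pass must match on both nodes;
# --addr is the shared gateway IP the VMs point at.
ucarp --interface=vmbr0 --srcip=5.x.x.205 --vhid=1 --pass=secret \
      --addr=5.x.x.210 \
      --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh
```

```
# /etc/ucarp/vip-up.sh -- run when this node becomes master; $1 is the interface
#!/bin/sh
ip addr add 5.x.x.210/32 dev "$1"
```

The down script would mirror this with `ip addr del`, so the virtual IP follows whichever node ucarp elects as master.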
Sup,
I am working on an HA network solution and I was thinking of implementing CARP or keepalived.
Would you like to share your working configuration?
I thought the Linux networking acted funky with the multicast needed for the virtual IPs, and Open vSwitch is still a no-go with multicast.
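For comparison, a minimal keepalived VRRP instance for a shared gateway might look like this — every value here (interface, router ID, password, VIP) is an assumption, not a tested configuration:

```
# /etc/keepalived/keepalived.conf -- sketch only; all values are placeholders
vrrp_instance GW {
    state MASTER             # BACKUP on the second node
    interface vmbr0
    virtual_router_id 51     # must match on both nodes
    priority 150             # lower value on the backup node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass secret
    }
    virtual_ipaddress {
        5.x.x.210/32         # shared gateway IP
    }
}
```

Note that VRRP advertisements use multicast by default, which is exactly where the OVS concern above would bite; keepalived can be switched to unicast peers if that becomes a problem.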
I tracked the issue down to a wrong network file.
I have Open vSwitch running alongside the networking.
A wrong configuration in /etc/network/interfaces was the problem; this created the hang.
Thanks for the fast reply.
If I access the /boot folder, the following files are present:
grub/
initrd
vmlinuz
vmlinuz-4.4.35-1-pve
config-4.4.35-1-pve
pve/
System.map-4.4.35-1-pve
initrd.img-4.4.35-1-pve
Where would other kernels be located?
I can't upgrade because I don't have a valid subscription yet, and...
I have a full PVE environment running with 3 nodes with Ceph block storage for HA at Packet (bare metal). The cluster was running great. Today I was working on an overlay network, and after every change I rebooted the server (call me lazy). After 5 or 6 reboots the server hung during boot. Packet...