(2nd Try) - Failover IP Help/Questions

pr0927

New Member
Jul 4, 2024
Hi all, tried to post this earlier and the entire thread just...vanished into the unknown.

Anyway, I'm not quite at the stage of setting up an HA cluster, but I'm marching in that direction. However, one thing has been confusing me, and looking online hasn't been too helpful - and consulting ChatGPT has been...not super confidence-inspiring.


Basically, I have three nodes I will be putting in a cluster (with roughly equal storage and ZFS-replicated content). My understanding is that in the event of failover or migration, the VMs and LXCs on a different node are "copies" of one another and will spin up with the same assigned IP address - which makes reverse proxy targets and other local network references relatively seamless.


I have one Debian VM running Docker containers. I also have one LXC running Cockpit for SMB sharing.


My Proxmox host is running an NFS server (on the host itself), so that I can map NFS shares into containers for certain volume paths, since I cannot pass my whole SATA controller through to a VM. Performance has been perfectly fine and working great! I did not set up my NFS shares through Cockpit, to minimize any additional overhead.
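
For context, the containers consume those shares as named Docker NFS volumes, roughly like this (a sketch of my setup - the IP, export path, and volume name are placeholders):

  # Named Docker volume backed by an NFS export (IP/path are examples)
  docker volume create \
    --driver local \
    --opt type=nfs \
    --opt o=addr=192.168.1.10,rw \
    --opt device=:/tank/appdata \
    appdata

So every container volume is ultimately pinned to that one host IP.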


My concern is when a given node goes offline: even if I have the NFS server manually installed on each node with the same shares and all...the IP address of the node that takes over will be different.


Meaning, any Docker containers pointing to NFS mounts will now be pointing at NFS paths that are offline, since those mounts reference the old node's IP address.


No Gluster or Ceph here - just doing ZFS replication between the systems, not a separate fileserver NFS pool or anything.


What is the smart person's workaround here? Is there any kind of virtual IP that all nodes can share and assume in the event they are the "active" node?


ChatGPT mentioned something about running "keepalived" on each node, but I've also been arguing with it over every tiny thing I have been doing, so I don't know how correct it is...
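
What it sketched was a VRRP setup along these lines on every node, all sharing one virtual IP (untested on my end - the interface, VIP, and priority are made-up examples):

  # /etc/keepalived/keepalived.conf - sketch for one node; the other
  # nodes would use state BACKUP and a lower priority
  vrrp_instance NFS_VIP {
      state MASTER
      interface vmbr0          # typical Proxmox bridge name
      virtual_router_id 51
      priority 150
      advert_int 1
      virtual_ipaddress {
          192.168.1.50/24      # placeholder VIP the NFS clients would mount
      }
  }

The idea being that clients mount NFS via 192.168.1.50 instead of any node's real address.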


Would appreciate any help/explanation for this - especially before I go down a road from which making changes will be a real pain.
 
My understanding is that in the event of failover or migration, the VMs and LXCs on a different node are "copies" of one another and will spin up with the same assigned IP address - which makes reverse proxy targets and other local network references relatively seamless.
They are out-of-sync copies, and you need to know this. If you need always-up-to-date data (e.g. for a database), you need to consider options other than ZFS, such as Ceph or another dedicated or distributed shared storage system. HA and ZFS do work together in a cluster, yet it is a PITA to set up, maintain, and move around compared to real shared storage, where you don't need to do anything.

My concern is when a given node goes offline: even if I have the NFS server manually installed on each node with the same shares and all...the IP address of the node that takes over will be different.
They may see older data as described above.

What is the smart person's workaround here? Is there any kind of virtual IP that all nodes can share and assume in the event they are the "active" node?
Move the stuff into a VM and use its IP address.
 
They are out-of-sync copies, and you need to know this. If you need always-up-to-date data (e.g. for a database), you need to consider options other than ZFS, such as Ceph or another dedicated or distributed shared storage system. HA and ZFS do work together in a cluster, yet it is a PITA to set up, maintain, and move around compared to real shared storage, where you don't need to do anything.
Yeah, I know, but that's OK - this is for a homelab; I can do periodic syncs several times a day. Not a problem.
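
If I understand the Proxmox tooling right, that's just a storage replication job per guest, something like this (guest ID, target node, and schedule are placeholder examples):

  # Replicate guest 100 to node pve2 every 30 minutes
  pvesr create-local-job 100-0 pve2 --schedule "*/30"
  pvesr list    # confirm the job is scheduled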

Move the stuff into a VM and use its IP address.
I can't - the drives with the content are 4x 8TB HDDs via SATA, and I can't pass my SATA controller through to a VM. This is why I do the NFS shares.

Changing the host IP in a failover scenario or putting the NFS server inside the Cockpit LXC container seem to be the only realistic options here.
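
If I go the LXC route, I assume it would just be bind mounts from the host's ZFS pool into the Cockpit container, something like this (container ID and paths are made up):

  # Bind-mount a host dataset into LXC 101 at /srv/media
  pct set 101 -mp0 /tank/media,mp=/srv/media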
 
They are out-of-sync copies, and you need to know this. If you need always-up-to-date data (e.g. for a database), you need to consider options other than ZFS, such as Ceph or another dedicated or distributed shared storage system. HA and ZFS do work together in a cluster, yet it is a PITA to set up, maintain, and move around compared to real shared storage, where you don't need to do anything.
"compared to a real shared storage"

Would this include, say, a TrueNAS shared storage? Meaning, do the headaches somewhat go away after that?

I love the idea of Ceph, and have set up a cluster to test and play with, but what's holding me back is the network requirements. I can't really justify the hardware expense at this point (though I will if push comes to shove), and would love to use the current servers I have.

The idea was to use one of them as a TrueNAS NAS running ZFS, and then share that with the three servers. But honestly, I'm getting lost in the research, as there are just so many ways to do this, I suppose.
 
The idea was to use one of them as a TrueNAS NAS running ZFS, and then share that with the three servers. But honestly, I'm getting lost in the research, as there are just so many ways to do this, I suppose.
I would not consider ONE TrueNAS box valid storage in comparison to ZFS with replication or Ceph. It's one box, and if it fails, you will end up with a complete cluster outage. If you can live with that, go with it. Ceph would be the solution that is easiest to set up and maintain over time.
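
On PVE the Ceph setup itself is only a handful of commands per node - a rough sketch, where the cluster network and OSD device are placeholders:

  pveceph install                        # on every node
  pveceph init --network 10.10.10.0/24   # once; a dedicated Ceph network is just an example
  pveceph mon create                     # on every node
  pveceph osd create /dev/sdX            # per data disk

The expensive part is the fast dedicated network, not the software.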
 