Running Proxmox for web application server (multiple mysql databases + php)

I used to get a 2nd identical motherboard (typically 2nd hand too) as a backup, i.e. a "fallback", and keep a spare set of HDDs (now SSDs), since they are the parts that degrade fastest. Monitors too. :)
 
Thanks for the replies.
We are thinking about having 2 additional servers:

1 main server in the office,
1 server at my colleague's home,
1 server at my home.

If the main server goes down, one of the other servers can be used as a fallback, and if that goes down, the 3rd can be used as a fallback.
Is this correct?

And how does it work with IP addresses? The web domain points to my office IP address. If that server goes down (or the network goes down), do I need to manually update the IP address in the DNS record?
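For what it's worth, the usual self-hosted answer to the DNS question is a low-TTL record plus a small health-check script that switches the record when the primary stops responding. A minimal sketch of the decision logic only (the server names and IPs are invented, and the health check is passed in as a callable so no real servers or DNS API are involved):

```python
# Hypothetical failover decision: walk the server list in priority
# order and return the first one that answers a health check.
SERVERS = [
    ("office",    "203.0.113.10"),   # primary
    ("colleague", "198.51.100.20"),  # first fallback
    ("home",      "192.0.2.30"),     # second fallback
]

def pick_active_ip(is_healthy):
    """Return the IP of the highest-priority healthy server.

    `is_healthy` is a callable (e.g. an HTTP probe) so the
    logic can be tested without any real servers.
    """
    for name, ip in SERVERS:
        if is_healthy(ip):
            return ip
    return None  # everything is down; leave DNS alone

# Example: the office link is down, so DNS should be switched
# to the colleague's server.
down = {"203.0.113.10"}
assert pick_active_ip(lambda ip: ip not in down) == "198.51.100.20"
```

In practice you would run something like this from a machine outside the office and call your DNS provider's update API (or a dynamic-DNS client) whenever the chosen IP changes; keeping the record's TTL short (say 60-300 seconds) makes the switch propagate quickly.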

Also, I am thinking of installing 3CX on the server for my phone connection.
Could this be synced as well, so that if one server goes down my phone system stays online?
 
Keep in mind that you want low latency for your cluster, so a node at home may not work if the latency is too high. Recommended is <1ms, but it might work a bit higher.
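You can get a rough latency number with `ping` between the candidate nodes; the worst case matters more than the average. A small helper for reading the numbers out of the summary line (this assumes the Linux `ping` output format; the sample string is invented):

```python
import re

def ping_rtt_ms(ping_output):
    """Extract (min, avg, max) round-trip times in ms from the
    summary line of Linux `ping`, e.g.
    'rtt min/avg/max/mdev = 0.211/0.254/0.311/0.040 ms'."""
    m = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/", ping_output)
    if not m:
        raise ValueError("no rtt summary found in ping output")
    return tuple(float(x) for x in m.groups())

sample = "rtt min/avg/max/mdev = 0.211/0.254/0.311/0.040 ms"
mn, avg, mx = ping_rtt_ms(sample)
assert mx < 1.0  # under the ~1 ms guideline mentioned above
```

A home DSL or fibre link to the office will typically measure in the tens of milliseconds, which is why a node at home is usually out for a corosync cluster.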
 
But having 3 servers in my office wouldn't help if the internet or power goes down.
A UPS could help with a power outage, but it can't keep the system running for hours.

What would happen if latency is too high?
 

What would happen if latency is too high?
It may break corosync synchronization, cause loss of quorum, etc.
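The quorum arithmetic behind that remark can be sketched in a few lines (a simplification of what corosync does: it requires a strict majority of all configured votes):

```python
def quorum(total_votes):
    # Corosync needs a strict majority of the configured votes
    # before the cluster is allowed to operate.
    return total_votes // 2 + 1

# 3 nodes, 1 vote each: quorum is 2, so the cluster survives
# losing (or being cut off from) any single node.
assert quorum(3) == 2

# 2 nodes: quorum is also 2, so if high latency makes the nodes
# lose sight of each other, *neither* side has quorum and both
# stop scheduling HA work.
assert quorum(2) == 2
```

This is why a high-latency link is dangerous: nodes that miss corosync's token timeouts get treated as dead, and whichever side is left without a majority loses quorum.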
 
I run everything in my house. I have never used a hosting provider; I have self-hosted since 1995.

My advice is to do it yourself; you may even end up provisioning for others. Experience is the best teacher. Do not be put off by the hyperscalers.

I have about six servers with specs like yours, e.g. Dell R720, R620, some with 128GB RAM, some with 64GB, etc.

I have been running all my online teaching, website, self-hosted Git, CVS, etc. for decades.

You have made the best and right decision by going self-hosted.

Get servers with an H310 card or an HBA for your storage.

Don't look back; go for it and take it from there.

God bless!!!

Regards
 
So my idea to host my applications myself isn't a stupid thing? I mean, I don't have power backup or network backup, but apart from that it would be okay to host it myself and have the server in my office?
Have you considered "telehousing"? Put your physical machine into a fully or semi-managed secure facility, or "co-lo": you drop the physical machine (possibly after install and config, possibly before) at your friendly local data-centre type place. In NZ it's literally called Data Centre, at 220 Queen St, where you can plug a cable into 170 other telcos, including NZIX, the main "unrated peering" exchange (a huge switch?), which eventually got moved there; no idea where it was beforehand. It is also 360 metres from the main Telecom NZ exchange on Mayoral Drive, which more or less plugs you into all the fibre-to-the-home lines in NZ. Apparently they will even let you plug a cable in yourself (unheard of in other parts of the world).
 
It may break corosync synchronization, cause loss of quorum, etc.
**Never** use a qdevice / qnetd daemon; if you have an even number of nodes in your cluster, giving one machine 2 votes is much better than using a Raspberry Pi or a non-cluster machine/workstation for quorum, like I did. I regretted it. I'm loving 2 votes for Hulk (rack server): no more fencing surprises. I reckon the only valid use would be a cloud-based qdevice, since that is more likely to be always reachable.

1 hour corosync event
After rolling back an OS update that went bad on 'Hulk', I noticed corosync was saturating the logs, showing sync behaviour replaying the uninstall that I had done in bulk using Timeshift. It was impressive to see, and I felt sure it was all good. But it never stopped.
I had a weird issue with a two-node cluster (rack server with 2 votes, the other node 1). I made a Timeshift (rsync'ed hardlinks) backup of the main Proxmox host prior to doing 'apt update; apt upgrade', which had weird effects, locking me out of the web UI, so I rolled back via SSH. After rebooting, corosync took 1 hour to resync. Staring at the logs, I was seeing network interfaces popping up and down and felt this was for sure hindering the sync. After some reboots of all the nodes there was still no go, so about 45 minutes in I started hot-swapping Ethernet leads. Finally, once I plugged the original cable back in, something to do with link auto-negotiation made it pop back to 1Gb/s, which was great (I had been running at 100Mb/s for what must have been over a month, unable to figure it out), and this let the resync finish. Bam, I was back into the web UI; amazing, Proxmox. Maybe use 2 cables, etc.
 
**Never** use a qdevice / qnetd daemon; if you have an even number of nodes in your cluster, giving one machine 2 votes is much better.
It's really not. If your two-vote machine fails, you will lose quorum. There is a reason why this is not recommended in the manual. A qdevice is needed for two-node clusters and recommended for clusters with an even number of nodes; it is not recommended (and problematic) to add one to clusters with an uneven number of nodes: https://pve.proxmox.com/wiki/Cluster_Manager#_corosync_external_vote_support
I also don't get why you answered a thread from 2023; AI bot gone wild?
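The difference between the two schemes falls straight out of the quorum arithmetic; a sketch (node names and the helper are invented for illustration):

```python
def has_quorum(alive_votes, total_votes):
    # A partition keeps quorum only with a strict majority of
    # all configured votes.
    return alive_votes > total_votes // 2

# Scheme A: two nodes, "Hulk" carries 2 votes (3 votes total).
# If Hulk dies, the remaining 1-vote node is alone:
assert has_quorum(1, 3) is False   # survivor loses quorum
# If the 1-vote node dies, Hulk keeps going:
assert has_quorum(2, 3) is True

# Scheme B: two 1-vote nodes plus a qdevice (3 votes total).
# Either node can die; the survivor plus the qdevice still
# hold 2 of 3 votes:
assert has_quorum(2, 3) is True
```

So the 2-vote trick only protects against losing the *other* node, while a qdevice protects against losing either one, which is why the manual recommends the qdevice for two-node clusters.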

It should also be noted that you should have a dedicated network for corosync, so the cluster will still work even if your regular network fails or gets congested.
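On Proxmox VE this is done by giving each node a second ring address in `/etc/pve/corosync.conf`, so corosync can fail over between links. A hedged sketch of the relevant fragment (addresses and node names are invented, and exact option names can vary between corosync versions, so check the wiki page linked above before editing):

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.10.1   # dedicated corosync NIC
    ring1_addr: 192.168.1.1  # regular LAN as fallback
  }
  # ... one entry per node, each with ring0_addr and ring1_addr
}

totem {
  # ...
  interface {
    linknumber: 0
    knet_link_priority: 10   # prefer the dedicated link
  }
  interface {
    linknumber: 1
  }
}
```

The dedicated link should be a quiet, low-latency segment (ideally its own switch or a direct cable); corosync is far more sensitive to latency spikes than to bandwidth.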
 