Failover option advice, please?

jojo90014

New Member
May 2, 2024
Hello. I am very new to this and already have one server up and running with 20 Windows VMs, which users access via Guacamole (ONE SSD only, with the guests and the server install on it).

I would like to add a second server, or two more servers if needed, to automatically activate if the main server fails, so everyone can still access their Windows VMs in Guacamole.

My question is, TAKING THE BELOW into consideration: is it possible, and what would be the best, least expensive way to go about it?

1. Machines: I would love not to have to spend more money.
- Server 1 has a single 4TB drive.
- My second machine, to be Server 2, also has a single 4TB drive.
- My third machine, to be Server 3 if necessary, has 256GB + 1TB of storage.


2. Server 1 has only one SSD. Obviously the best option would be to use Server 1 as is, since it is already up and working with 20 VMs, then add Server 2 and/or 3, but it currently has only ONE SSD shared between the install and the VMs. I would not know how to change that to two SSDs without losing the VMs that are already functioning.

3. I can add Server 2, and a 3rd if needed. Clients only access the VMs to read a main database stored on our local main server, so there is no need for exact replication. Meaning, if Server 1 fails, Server 2 does not have to be an exact live copy; the latest copy from one day ago would be fine.

4. I also have a nice, very fast Synology NAS for storage, if that helps with shared storage. As long as you think it performs okay, it would be great not to add a second SSD to each server. (Though I read in one place that it will probably not be a good option to store all my VMs as shared storage?!)

5. Each VM is individually accessed via Guacamole. So if Server 1 fails, Server 2 has to activate automatically, and the VMs need to be accessible at the same IP addresses as the original VMs, so our reps can still reach them via Guacamole.


I would think this is something that can be done, but I cannot really find an answer in the forums on how to do it (and the best way), especially because I am so new to this.

I've already watched some 25+ videos, but they are mostly for people who already know the terms and Proxmox better, and they are never really complete. I am quick to learn, so I did get a lot from them.
I know there's Ceph, ZFS and other options, but I don't know which way to go... and the videos always lack info about:

- Do I need only one SSD per server, shared between the installation and the VMs,
- or do I really need two SSDs,
- or only one SSD for the install, with the VMs on the Synology...

Again, my current Server 1 is already up and running with a shared SSD for install and VMs, and I would hate to lose all my VMs and start all over... AND have to buy new SSDs for the machines I already have.

Your advice and help would be greatly appreciated.

Thank you so much!!!
 
Hi @jojo90014 , welcome to Proxmox.

I would like to add a second server, or two more servers if needed, to automatically activate if the main server fails
NO REDUNDANCY IS NEEDED.
These two statements are contradictory. Automatic Server/VM activation provides redundancy.
Perhaps you meant that you don't want to run a cluster. It's possible to deploy that way, however, that would introduce a few limitations:
- you can't run the built-in Ceph implementation
- you can't do built-in ZFS replication
Both of those require a PVE cluster.
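For context, the built-in ZFS replication mentioned above is driven by the `pvesr` tool on clustered nodes. A minimal sketch, assuming a cluster with ZFS-backed storage on both nodes; the VM ID 100 and the target node name "pve2" are placeholders:

```shell
# Sketch only - assumes a PVE cluster with ZFS-backed storage on both nodes.
# VM ID 100 and target node "pve2" are placeholders.

# Replicate VM 100's disks to pve2 once a day at 03:00
pvesr create-local-job 100-0 pve2 --schedule "03:00" --comment "daily standby copy"

# Check the job configuration and last run status
pvesr list
pvesr status
```

A daily schedule like this matches the "a copy from one day ago is fine" requirement, at the cost of losing up to a day of changes on failover.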

It sounds to me like you are looking for application-level redundancy. In that case, you can have 3 independent PVE nodes, pre-populate them with VMs created from a template, and design the activation to your needs on your own. It will be a highly custom design that you will be responsible for.

Yes, you can store VM disks on an external NAS, and you can prepopulate the VM config on each independent node. Again, this will be a non-standard custom configuration. It would be best if you spent a lot of time in the lab doing this. You will need to become a PVE expert to avoid data loss/corruption. PVE is not meant to be run that way.

Each VM is individually accessed via Guacamole. So if the Server 1 fails, Server 2 has to activate automatically, and the VMs need to be accessed, being the same IP address as their original VMs, so our reps can still access the VMs via Guacamole.
This is what a standard PVE cluster is for. Technically, Server 2 is always active; it's the VM that migrates to Server 2 and becomes active there.
To achieve this you need shared storage. The Synology can be that storage. Whether it will be able to handle the load is for you to find out.

It sounds like you are trying to design a production system on the fly with production data. It can be done, but the adage "measure twice, cut once" applies tenfold.

Watch some YouTube tutorials on PVE clustering and shared storage. Install a nested (virtual) PVE cluster and experiment there. Or contact a Proxmox Partner for Professional Services assistance.

Good luck

P.S. Don't run a 2-node cluster - it's not a supported configuration. Either use 3 nodes, or read up on quorum/QDevice.
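If only two nodes are ever available, the supported route is an external QDevice, which provides a tie-breaking third vote from any small machine. A rough sketch of the setup (the qnetd host IP below is a placeholder):

```shell
# On the external tie-breaker machine (IP is a placeholder):
apt install corosync-qnetd

# On both cluster nodes:
apt install corosync-qdevice

# From one cluster node, register the external vote:
pvecm qdevice setup 192.168.1.50

# Verify the cluster now expects 3 votes:
pvecm status
```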


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 

Thank you.

I corrected the 'redundancy' wording. Wrong word... Actually, I rewrote the whole thing... hopefully that makes it easier to understand what I have and what I am trying to do. I just meant that if Server 1 fails, Server 2 does not have to be an exact live copy; if the latest copy is a day old, that would be fine. I really want to set it and forget it, so no independent servers are wanted, because then I would have to check whether they are working, and there would be a conflict of IPs between the VMs, which would be the same.

So okay, best option is to use a cluster with 3 nodes. Fine... however...

I've already watched some 25+ videos, but they are mostly for people who already know the terms and Proxmox better, and they are never really complete. I am quick to learn, so I did get a lot from them, but I cannot be sure which option to go with.

I am not sure which option would be better for my case and if:
- I need only one SSD per server, shared between the installation and the VMs,
- or I really need two SSDs,
- or only one SSD for the install, with the VMs on the Synology...

Obviously the best option would be to use Server 1, which is already up and working, and add Servers 2 and 3, since it currently has only ONE SSD shared between the install and the VMs. I would not know how to change that to two SSDs without losing the VMs that are already functioning.

I know I can do it, but I sincerely need a little direction on the above. Your advice would really help me accomplish this... I really don't have a budget for a specialist. :( Thank you again!
 
- Do I need only one SSD per server and share installation and VMs
There is no particular need to have two physical SSDs local to each server. It may make life easier to have two and isolate one set of data from the other. You can achieve "virtual" isolation with partitions. You can even run Ceph on a partition.
So to answer your question very simply - no, you don't.
- or if I really need two SSDs
No, you don't.
- or if only one SSD for install and VMs on Synology...
Yes, you could do that. And if I were in your position with all the same restrictions, that's what I would probably do.

The high level would be to:
a) Watch a few more videos on building a cluster. Don't worry about storage yet.
b) Join Servers 2 and 3 to Server 1. Keep in mind that there should be no VMs on 2 and 3 at the time they join.
c) Add your NAS as shared storage (use the GUI) to your cluster. Your choices are NFS, CIFS or iSCSI. Each comes with its own limitations. The easiest and most natural for you is NFS. Don't over-think it; this is not a once-in-a-lifetime choice. You can move to a different protocol later.
d) Use the GUI "Move disk" option to move your VM disks to the new shared storage.
e) Enable HA.
f) Grab a cold one.
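Steps b) through e) roughly map to these commands if you prefer the CLI over the GUI. The cluster name, IP addresses, NFS export path, storage ID, and VM/disk IDs below are all placeholders; run this in the lab first:

```shell
# b) Create the cluster on Server 1, then join the others
#    ("pvecm add" runs on each joining node, pointing at Server 1's IP)
pvecm create mycluster              # on Server 1
pvecm add 192.168.1.11              # on Server 2, then again on Server 3

# c) Add the Synology as NFS shared storage (server/export are placeholders)
pvesm add nfs synology-vms --server 192.168.1.20 \
    --export /volume1/proxmox --content images

# d) Move a VM's disk to the shared storage (VM 100 / scsi0 are placeholders)
qm move-disk 100 scsi0 synology-vms

# e) Put the VM under HA management so it restarts on a surviving node
ha-manager add vm:100
```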

I really don't have a budget for a specialist
You just have to try it. As I said, build a virtual lab for yourself (3 PVE VMs don't take that many resources) and model your process there.

Good luck


 
You just have to try it. As I said, build a virtual lab for yourself (3 PVE VMs don't take that many resources) and model your process there.
Just occurred to me: take those two new servers and build a test cluster there to model your steps. That way you avoid disturbing your production. When you are comfortable, wipe and reinstall them to create the production cluster.



Great advice thank you, thank you so much! You gave me the push that was needed.

I know I could partition the SSDs, but Server 1 is already working... so I don't have that option now, unless I start over... it would be a shame to waste the 4TB, lol.

One of the best things you said was that I could try doing this in VMs first! I didn't think about doing that, even after watching a video where the guy did exactly that! Thanks.

Quick question, by the way, about the NAS shared storage (I have not seen videos about that yet): if my NAS is already being used as my backup server and photo storage, would I have to start all over to make it a VM NFS storage? If that is needed, I won't even think about that option.

In that case, from what you said, I could create the cluster using a shared SSD for the install and VMs, am I right?
Or for Ceph (which would be faster), without compromising my current VMs, I would probably need to add an SSD to Server 1 (and waste the 4TB already there) and partition the SSDs in my other machines, so they can be added as cluster nodes, correct?
(Hopefully different storage sizes won't matter? My third machine has a different storage size than the other two.)

Thank you. Let me watch more videos while I wait for your kind advice. You're a godsend! I truly appreciate it!
 
Quick question by the way, for the NAS shared storage (I have not seen videos about that yet), if my NAS is already being used as my backup server and photo storage, would I have to start all over to make it a VM NFS storage?
You can use a NAS (NFS or CIFS) for many different applications and completely isolate the datasets from each other. Look into creating new exports/shares. As long as you have enough space, you will be fine.
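For example, once a dedicated export exists on the Synology, only that export gets attached to PVE, leaving the existing backup and photo shares untouched. The storage ID, IP, and export path below are placeholders:

```shell
# Attach a dedicated NFS export for VM disks (names/paths are placeholders)
pvesm add nfs pve-vms --server 192.168.1.20 \
    --export /volume1/pve-vms --content images

# Confirm the storage is active on all nodes
pvesm status
```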
In such case, for what you said, I could create the cluster using a shared SSD for install and VMs, am I right?
I don't know what you mean by "shared SSD".
The physical SSD in each server is NOT shared; it's local. You may be able to implement a shared filesystem on top of the SSD, e.g. Ceph.
Or perhaps you meant the SSD in your NAS? Again, technically it's not the SSD that is shared but the filesystem on top of it, which is shared via a file-sharing protocol (NFS/CIFS).
Or for ceph (which would be faster),
This statement is questionable. To get good performance out of Ceph, you need good hardware and network, as well as a good understanding of all the technologies involved.
Stick with NAS for now. There are a few Proxmox/NAS videos out there as well.


 

By "shared SSD" I meant two partitions on the same SSD, sorry.
Thank you, you rock! I will likely do that, starting with the NAS!


Let me give you the latest updates, maybe you would suggest a different route.

I did find, in my stuff, 3 old 256GB SSDs! I am adding them to the new machines 2 and 3, so I can install Proxmox on the smaller SSD and leave the larger one for local storage. I will also add a smaller SSD to machine 1, after I import the VMs to machine 2, which I will make into my new 'main' server, because...

(I AM SO PROUD OF MYSELF) I was able to back up the current VMs to the Synology over NFS, so I can import them to machine 2 (my future 'main' server). Once I am done with that, I will reinstall Proxmox on the smaller SSD in machine 1 (the one I am currently using) and use it as node 2, and also add node 3! This way they all have local storage separate from the install storage, and I will be ready for whichever option I decide on now or down the road, OR whatever YOU SUGGEST, now that I have a new configuration!

One last question, I hope. I was installing Proxmox on machine 2, and when it asked me for the storage to install to, I selected the small SSD, but then I thought... let me ask before I mess it up... Which filesystem would you suggest for the Proxmox install?

Thanks for giving me the push I needed. I feel like I am 12 again, when I first started with computers and BASIC / Clipper / COBOL programming, lol... I am 47 now!

Best!-
 