New to Proxmox..

rocket5618

New Member
Dec 25, 2025
Hey guys, hope you're all well!

A while back I used to run a small operation on Nutanix, but it became far too costly, and their support team eventually insisted on forcing upgrades in return for retaining support. Not only was this risky (imo), but it led to quite a few major issues, which isn't good on live client servers! The expense was far too high, and it just wasn't viable for a small business, so I moved on..

Recently I became aware of this system, and it looks fantastic. I've watched lots of videos, but I have a few questions for those offering this as a hosting platform who are willing to assist:

* Can you create isolated remote access, e.g. for a client's internal IT admin, so that they can mount ISOs and upgrade their own virtual machines?
* For those with Standard support subscriptions, are you able to indefinitely run on the same version and still receive support? (I don't want to keep being forced to upgrade live systems when they're running perfectly fine, aside from SERIOUS vulnerabilities)
* How reliable is the failover in a 3 node cluster, and how fast will it fail over should a node go down? Is it also easy enough to force replication once the third node is back in operation?


Thanks!
 
* For those with Standard support subscriptions, are you able to indefinitely run on the same version and still receive support? (I don't want to keep being forced to upgrade live systems when they're running perfectly fine, aside from SERIOUS vulnerabilities)
I'm almost certain that the answer is no: https://pve.proxmox.com/pve-docs/chapter-pve-faq.html#faq-support-table . But you'd need to ask the Proxmox sales office, or the partner that you use (or intend to use), for a definite answer.
* ..., and how fast will it fail over should a node go down?
At least one minute: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#ha_manager_fencing

EDIT: Regarding how reliably it works, you can set it up (with all features) for free and test your workloads and failure scenarios yourself to make sure it lives up to your expectations.
 
Can you create isolated remote access, i.e. to an internal IT admin, in order for them to mount ISOs and upgrade their own virtual machines?
yes. see https://pve.proxmox.com/wiki/User_Management#pveum_permission_management
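As a rough sketch of what that looks like in practice: Proxmox VE ships a built-in PVEVMUser role (console access, power management, CD-ROM/ISO changes) that you can scope to individual guests via ACL paths. The username, VM ID, and storage name below are placeholders, not anything from this thread:

```shell
# Create a user in the built-in "pve" authentication realm and set a password.
pveum user add itadmin@pve --comment "Client IT admin"
pveum passwd itadmin@pve

# Grant the built-in PVEVMUser role on a single guest only (VM 100 here).
# Use a /pool/<name> path instead to cover a whole group of VMs.
pveum acl modify /vms/100 --users itadmin@pve --roles PVEVMUser

# To let them attach ISO images, they also need access to the ISO storage:
pveum acl modify /storage/local --users itadmin@pve --roles PVEDatastoreUser
```

The user then logs into the web UI and sees only the objects they were granted, which gives you the isolation the question asks about.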
For those with Standard support subscriptions, are you able to indefinitely run on the same version and still receive support? (I don't want to keep being forced to upgrade live systems when they're running perfectly fine, aside from SERIOUS vulnerabilities)
you will not be DENIED support, but the first answer to any issue will be "make all cluster members the same version." This isn't unique to Nutanix or Proxmox; it's just the design criteria of the software. More to the point, consider running nodes of different major point releases in the same cluster NOT supportable. Having said that, be aware that the PVE version support cycle isn't very long-lived; see https://forum.proxmox.com/threads/proxmox-ve-support-lifecycle.35755/ . So expect to be more hands-on for the lifespan of your environment, and be prepared to lab major point-release upgrades.

How reliable is the failover in a 3 node cluster, and how fast will it fail over should a node go down?
As reliable as your environment, and as fast as the network and storage topologies allow; 60-90 seconds is common with proper shared storage. Since you're coming from Nutanix, I assume you intend to use Ceph — make sure to familiarize yourself with how it works and how to get optimal results. This forum has very good resources available for that; you can start here: https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/
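For context on how failover is actually configured: guests are not HA-managed by default; you opt each one in with the `ha-manager` tool, and the cluster then restarts it on a surviving node after the failed node is fenced. The VM ID below is a placeholder:

```shell
# Put a guest under HA management; "started" means the cluster should
# keep it running, relocating it to another node if its host fails.
ha-manager add vm:100 --state started

# Show the HA stack's view: quorum, resource states, and current node.
ha-manager status
```

The fencing wait mentioned above is why failover takes on the order of a minute — the cluster must be sure the failed node is really dead before restarting its guests elsewhere.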

Is it also easy enough to force replication once the third node is back in operation?
It depends on what you mean, and on what storage solution you deploy. Cluster metadata will automatically synchronize when a node rejoins the cluster, and storage (assuming Ceph) will automatically redistribute any pending writes back to OSDs as they return to the cluster. It is transparent to the user.
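If you want to watch that recovery happen rather than take it on faith, a few read-only status commands (standard Proxmox/Ceph tooling, nothing specific to this thread) show the rejoin and backfill in progress:

```shell
# Cluster membership and quorum — confirms the node has rejoined.
pvecm status

# Ceph health summary; during recovery you'll see backfill/recovery
# progress counters until the cluster returns to HEALTH_OK.
ceph -s

# Per-OSD view — confirms the returned node's OSDs are back "up" and "in".
ceph osd tree
```

Once `ceph -s` reports HEALTH_OK, the redistribution described above has completed and no manual "force replication" step is needed.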
 