GUI-only node

ropeguru

So one of the issues I have with Proxmox is the lack of a GUI that stays accessible no matter which node is rebooted. For instance, if I happen to be logged into node 1 of three nodes and I reboot that node, I then have to go through the hassle of bringing up a new browser connection to another node to monitor progress.

Is there any way to create an instance that can be set to never host any VMs, in order to use it as a management instance?

I started testing this on just a small VM by installing Proxmox on it, which btw gives you a third node for quorum, then creating a group with the two usable nodes and putting all of my VMs in that group to create affinity to just those two nodes. It seems to work, but the management VM still shows up as an option when adding new instances.
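
For anyone wanting to replicate this, a minimal sketch of that affinity setup using the ha-manager CLI (the group name prod-only, the node names pve1/pve2, and VMID 100 are examples, not from this thread):

```sh
# create a restricted HA group so resources may only run on the two "real" nodes
ha-manager groupadd prod-only --nodes "pve1,pve2" --restricted

# register a VM as an HA resource pinned to that group
ha-manager add vm:100 --group prod-only
```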

Has there been any thought on creating a management-only GUI?
 
Is there any way to create an instance that can be set to never host any VMs, in order to use it as a management instance?

If this "management VM" has no support for VM or Containers, you cannot move a disk to it (every try to do this will fail).
It's not perfect, but it'll work technically. It will however still show up on any create/migrate dialog. Is this what you already tried?

So one of the issues I have with Proxmox is the lack of a GUI that stays accessible no matter which node is rebooted. For instance, if I happen to be logged into node 1 of three nodes and I reboot that node, I then have to go through the hassle of bringing up a new browser connection to another node to monitor progress.

Is this different on other hypervisors? I hear you, but how often does this "problem" arise? I've been using PVE for over 4 years now and I have never run into such a problem. Of course, if you time it correctly you only need to log in again once, so that is what I always do.
 
If this "management VM" has no support for VM or Containers, you cannot move a disk to it (every try to do this will fail).
It's not perfect, but it'll work technically. It will however still show up on any create/migrate dialog. Is this what you already tried?

Yes, this is what I have tried. All of my storage backing is via NFS, so even that gets attached when a node is added to the cluster.



Is this different on other hypervisors? I hear you, but how often does this "problem" arise? I've been using PVE for over 4 years now and I have never run into such a problem. Of course, if you time it correctly you only need to log in again once, so that is what I always do.

Yes, other setups have a VM or bare-metal server that has no capability for hosting VMs. It is strictly for management of the host nodes, HA, and other configuration.

I suggest you take a look at Xen Orchestra, VMware vCenter, and oVirt's hosted engine to understand more of what I am talking about. They are separate servers that run as a "management" GUI only. So no matter what, you are never logged directly into a node.

If you run them as a VM, then during maintenance of a node the VM just gets live-migrated to another node and you do not have to log back in.
 
Could you not use a reverse proxy with multiple backends to facilitate a more transparent jump when you reboot the node you are controlling Proxmox from? I have an Nginx VM that does just that, though I will admit the noVNC portion of my setup is not 100% when connected over the proxy.
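
For anyone wanting to try this, a minimal sketch of such an nginx config (the hostname, node IPs, and certificate paths are placeholders; the WebSocket upgrade headers are included because a missing upgrade pair is a common cause of noVNC console trouble behind a proxy):

```nginx
upstream proxmox {
    server 10.0.0.1:8006;
    # backup entries are only used when the primary is unreachable
    server 10.0.0.2:8006 backup;
    server 10.0.0.3:8006 backup;
}

server {
    listen 443 ssl;
    server_name pve.example.com;

    ssl_certificate     /etc/nginx/pve.crt;
    ssl_certificate_key /etc/nginx/pve.key;

    location / {
        proxy_pass https://proxmox;
        # PVE ships a self-signed certificate by default
        proxy_ssl_verify off;
        # WebSocket upgrade, needed for the noVNC console
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;
    }
}
```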
 
Could you not use a reverse proxy with multiple backends to facilitate a more transparent jump when you reboot the node you are controlling Proxmox from? I have an Nginx VM that does just that, though I will admit the noVNC portion of my setup is not 100% when connected over the proxy.

Yes, I have that too, but you still have to log in again; there is no authentication failover.
 
Yes, this is what I have tried. All of my storage backing is via NFS, so even that gets attached when a node is added to the cluster.

You can explicitly omit a node if you don't want the shared storage to be available there.
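
That node restriction can be set on the storage definition itself; a sketch, assuming the NFS storage is named nfs-data and the two worker nodes are pve1/pve2 (names are examples):

```sh
# limit the storage to the two worker nodes,
# so it is never activated on the management node
pvesm set nfs-data --nodes pve1,pve2
```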

I suggest you take a look at Xen Orchestra, VMware vCenter, and oVirt's hosted engine to understand more of what I am talking about. They are separate servers that run as a "management" GUI only. So no matter what, you are never logged directly into a node.

If you run them as a VM, then during maintenance of a node the VM just gets live-migrated to another node and you do not have to log back in.

Ah yes. There is nothing from Proxmox that will do that. There are, however, other projects out there that want to build something like that, also for multi-cluster configurations. The thread is in German, but it has screenshots: https://forum.proxmox.com/threads/proxmox-und-cockpit.54370/
 
You can explicitly omit a node if you don't want the shared storage to be available there.



Ah yes. There is nothing from Proxmox that will do that. There are, however, other projects out there that want to build something like that, also for multi-cluster configurations. The thread is in German, but it has screenshots: https://forum.proxmox.com/threads/proxmox-und-cockpit.54370/

Thanks for the link. It looks promising, but the GUI still just connects to a single node, so if that node is rebooted you are back to the same issue as logging into a node directly.

I might play around with setting up the Proxmox GUI on a plain OS and just not put any VMs there. It will probably take a bit of tweaking, but I may be able to get it to work.
 
Just wanted to circle back around on this setup.

So I have a two-node cluster, which actually hosts the VMs, and then added a VM as a management node. Thanks to strategic naming, when I am doing a task that requires choosing a host, I always see either the first or second node as the default choice.

The management node does not have any external storage attached, nor does it have the same network config as the other two nodes. So far everything works well; even when shutting down the host the management node is running on, it migrates to the other node and I have no loss of connection.

The other benefit, since I only have a two-node cluster, is that the management-only node acts as a quorum device, so I never fall below the required vote count for the cluster.

The last thing that would be nice to have, in order to make this work even better, would be for all VMs to be forced to migrate on a node reboot, not just on shutdown. It would save a few clicks when doing upgrades and also remove any accidental VM downtime if the admin reboots without manually migrating first.
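
Until something like that exists, a small drain script run on the node before rebooting can save those clicks; a sketch, assuming pve2 (a placeholder name) is the migration target and all VMs use shared storage:

```sh
#!/bin/sh
# live-migrate every running VM off the local node before a reboot
TARGET=pve2   # example target node

# qm list shows only the local node's VMs; skip the header, keep running ones
qm list | awk 'NR>1 && $3=="running" {print $1}' | while read -r vmid; do
    qm migrate "$vmid" "$TARGET" --online
done
```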
 
would be for all VMs to be forced to migrate on a node reboot, not just on shutdown.

Yes, this is easier said than done. We have discussed it plenty of times (just search the forums); here is the worst case:

You cannot guarantee that the other node can handle the VMs, and a blind switchover can instantly kill your whole cluster.

Therefore, the Proxmox staff will not implement something that can brick your cluster. A better but much more complex setup would be required, with failover classes (e.g. important vs. not important) or priority numbers, something that can be evaluated in case of a resource shortage. But shortage also depends on many things: storage, memory, CPU, and even swap.
 
