This is fixed [0], but not included in the released update you mentioned.
Check the "What's new" section the play store where we will mention this as we did previously, but thanks for testing it again highly appreciated.
[0]...
This is still under evaluation.
There are some legal questions which have to be considered, because AGPL apps, like GPL apps, are not fully compatible
with the Apple App Store's terms of service [0].
[0] https://www.fsf.org/blogs/licensing/more-about-the-app-store-gpl-enforcement
Will definitely take a look at this one; probably some usage stat from your iSCSI storage is missing, so the app errors out when the storage tiles are visible.
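If you want to double-check on your side, you can look at what the API reports for your storages; a quick sketch using pvesh on the node (the node name is a placeholder):

# look for missing usage fields (used/total) on the iSCSI entry
pvesh get /nodes/<your-node>/storage --output-format json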
Nothing planned now, but very likely to happen.
If you are curious, check out the source at git.proxmox.com; as always, contributions are highly appreciated.
Sorry, but PVE 5 isn't supported anymore. With the next update you will get this info right in the app on login when you try to connect to a host which doesn't have the required PVE version.
Updating your hosts to a recent 6.2-x version should fix the issue.
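For reference, a rough sketch of checking and upgrading a node from the shell, assuming the usual Proxmox package repositories are configured:

pveversion          # shows the installed Proxmox VE version
apt update
apt dist-upgrade    # pulls in the latest 6.x packages
pveversion          # verify you are on 6.2-x afterwards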
This is caused by cpulimit, which should be returned as a number but is returned as a string by the API.
Will be fixed in the next app update, thanks for reporting.
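If you are curious, you can inspect the raw value the API hands back yourself; node name and VMID below are placeholders:

# cpulimit should be a plain number here, not a quoted string
pvesh get /nodes/<node>/qemu/<vmid>/config --output-format json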
- using a custom port should work, just add the port to your IP/hostname, e.g. hostname:1234 (see the quick connectivity check after this list)
- aSpice does not support the virt-viewer MIME type, because the developer decided that this is only supported via the paid version of his app (Opaque)
- the sites screen displays your active sessions...
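As a quick connectivity check for a custom port, you can hit the API's version endpoint directly (hostname and port are placeholders; -k skips certificate verification for self-signed certs). Even an authentication error in the response confirms the port is reachable:

curl -k https://hostname:1234/api2/json/version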
Yeah, the easiest way to do this is to reinstall.
As said, your cluster lost quorum, so you need to set the expected votes to 1 before you can apply changes.
Maybe you should read a bit in our admin guide to get a better understanding of the whole cluster idea and what is meant by quorum...
Try to follow these steps: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_remove_a_cluster_node
You'll need to set
pvecm expected 1
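Roughly, the whole sequence on the surviving node looks like this (the node name is a placeholder, see the linked guide for the details):

pvecm status              # confirm the cluster lost quorum
pvecm expected 1          # lower the expected votes so changes apply again
pvecm delnode <nodename>  # remove the dead node from the cluster config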
Your cluster has already lost quorum after your removal attempt, I guess.
The easiest way would be to reinstall Proxmox VE on both nodes.
Out of the box there is no way to connect the clusters as described, but depending on the storage used, you could very well set up a disaster recovery strategy.
One example would be if both cluster A and cluster B had a Ceph storage; here you could use Ceph...