Welp guess that was a misunderstanding on my part, thanks for clarifying. :)
Okay so apart from trying to get ahead of OOM scenarios... is there _any_ other reason to use zram instead of just swap on disk?
I wasn't talking about running without swap; where did you get that impression? To me, using RAM to replace RAM when you're in an OOM situation is an operational liability. I'd rather use a local SSD for swap when absolutely necessary (last resort) and, day to day, just install more RAM or...
On PVE Nodes with plenty of RAM (not even close to running out), why even bother with zram?
I've inherited an environment with zram present on some of the PVE Nodes in the cluster, and it seems completely redundant. RAM used to... provide RAM when you're out of RAM? What?
So far all the...
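(For context, this is roughly how I've been checking what's actually set up on the inherited nodes; plain util-linux tools, nothing PVE-specific:)
# list active swap devices and their priorities (zram vs. disk-backed)
swapon --show
# per-device zram details: compression algorithm, stored vs. compressed size
zramctl
# overall memory/swap picture
free -h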
We just did some validation on some new PVE Nodes (and related switching) for how they handle total power loss.
Everything seemed to come back up just fine, except that on both nodes pvescheduler tries to start, times out (after 2 minutes?), and then never attempts to start again.
I...
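(For anyone following along, this is roughly what I've been poking at so far; standard systemd tooling, nothing exotic:)
# see why the unit gave up after the start timeout
systemctl status pvescheduler
journalctl -b -u pvescheduler
# bring it back manually once the node is otherwise healthy
systemctl restart pvescheduler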
I'm not entirely sure what syntax/structure to use to modify the /etc/pve/storage.cfg to apply this, and the webGUI does not offer a nice field for adding mount options. How should I modify said config file to make that work?
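Best guess at what I think I need, going off the general storage.cfg layout (a "type: storage-id" header followed by indented key/value lines). The storage name, monitor IPs, and the noatime option below are placeholders I made up, and I honestly don't know whether an "options" line is even accepted for this storage type, which is basically what I'm asking:
cephfs: my-cephfs
        path /mnt/pve/my-cephfs
        content iso,vztmpl
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        options noatime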
Tried to GoogleFu and didn't seem to find anyone else having this problem.
Relevant PVE cluster is like 11 nodes, 3x of which run a Ceph cluster serving RBD and FS.
All nodes in the cluster use the RBD storage and the FS storage. VM disks are allowed on RBD but not on FS; FS is primarily ISOs in this...
For the sake of curiosity, what kind of log insights would be useful for you? Jumping in half-way here so sorry if I'm not up to speed on full context :P
I can't speak for fabian, but in my case I _THINK_ it might have been because I added two VMs to a Resource Pool (when they previously were not a member of any Resource Pool). But the actual reason really was not clear. CTRL+F5 clearing the cache in my case seemed to do the "magical trick". As...
For future humans clearing browser cache can help: https://forum.proxmox.com/threads/angry-cluster-config-now-getting-lots-of-400-errors-from-web-ui.137613/#post-645114
Holy balls, I can't believe clearing browser cache fixed this problem for me. THANK YOU FOR POSTING THIS! It's INSANE that this fixed it!
It's just as insane that the default solution in this thread is to wipe all nodes and rebuild... just for something like this... I have never seen this problem...
Yeah, doing the same method by IP for all nodes, on all nodes, did the trick. Hoping this gets a proper solution soon that the cluster automates. Hopefully this work-around helps someone for now.
Ugh, this didn't even properly fix it anyway... randomly logged into the webGUI of a rando node, tried to open a webCLI shell to one of the new nodes, and it still asked me to confirm the fingerprint... fuck, this is so stupid
For future human purposes:
I created a list of commands covering all the nodes and executed it on every node via CLI. THIS IS NOT THE IDEAL WAY TO DO THIS AND I KNOW IT IS BAD PRACTICE, BUT FOR NOW IT IS GOOD ENOUGH:
ssh -o StrictHostKeyChecking=accept-new -t HostName1 'exit'
ssh...
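(Roughly what that generated list boils down to; a sketch with placeholder hostnames, so swap in your actual node names:)
# run on every node: each ssh accepts the remote host key into
# root's known_hosts and then exits immediately
for node in HostName1 HostName2 HostName3; do
    ssh -o StrictHostKeyChecking=accept-new -t "$node" 'exit'
done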
Okay, I actually need to SSH FROM every node in the cluster TO every node in the cluster to generate the known_hosts trust. Guh, this is a cluster-truck.
Feel free to move along and not even post if you don't see value in participating. I was simply asking for the proper process to do something that wasn't documented. And you're here to do... what exactly? Lecture me because I didn't say exactly what you wanted to hear? Why are you even bothering...
Are you just married to trying to lecture me? I'm going to ignore you, because you seem more fixated on making me fit into your box of "help" than on actually getting anything accomplished. Go away, troll; I have actual work to do, and it doesn't include placating your pedantic ego-stroking.
omfg, and that method doesn't even actually solve the problem; this is just such a shit show... I just wanted to add two new PVE Nodes to this cluster, and I'm burning way too much time on this SSH BS that should've been solved months ago >:|