Search results

  1.

    zram - why bother?

    Welp guess that was a misunderstanding on my part, thanks for clarifying. :) Okay so apart from trying to get ahead of OOM scenarios... is there _any_ other reason to use zram instead of just swap on disk?
  2.

    zram - why bother?

    I wasn't talking about running without swap, where did you get that impression? To me, using RAM to backstop RAM when you're already in an OOM situation is an operational liability. I'd rather use a local SSD for swap when absolutely necessary (last resort) and day to day just install more RAM or...
  3.

    zram - why bother?

    On PVE Nodes with plenty of RAM (not even close to running out), why even bother with zram? I've inherited an environment with zram present on some of the PVE Nodes in the cluster, and it seems completely redundant. RAM used to... provide RAM when you're out of RAM? What? So far all the...
  4.

    pvescheduler doesn't retry to start after timeout from PVE Node power loss

    We just did some validation on some new PVE Nodes (and related switching) for how they handle total power loss. Everything seemed to come up just fine, except on both of the nodes pvescheduler tries to start up, and times out (after 2 minutes?) then never tries to start back up again. I...
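One possible mitigation for the "times out once, never retries" behavior is a systemd drop-in that tells the unit to restart after a failed or timed-out start. This is a sketch, not an official fix: the drop-in path and values below are assumptions, so verify against your PVE version before relying on it.

```ini
# /etc/systemd/system/pvescheduler.service.d/retry.conf  (hypothetical drop-in)
[Service]
# A start timeout counts as a failure, so on-failure covers this case.
Restart=on-failure
RestartSec=30
```

Apply with `systemctl daemon-reload && systemctl restart pvescheduler`. A drop-in keeps the vendor unit file untouched, so package updates won't overwrite the change.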
  5.

    CephFS mount breaks with network connectivity interruption, does not fix when network resumes

    I'm not entirely sure what syntax/structure to use to modify the /etc/pve/storage.cfg to apply this, and the webGUI does not offer a nice field for adding mount options. How should I modify said config file to make that work?
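For reference, a sketch of what such an entry might look like, assuming your pve-storage version accepts an `options` line for cephfs-type storages (as it does for NFS/CIFS; check `man pvesm` for your release). The storage name, path, and content types are illustrative; `recover_session=clean` is the kernel CephFS mount option commonly suggested for recovering a blocklisted client session after a network interruption.

```
cephfs: cephfs-iso
        path /mnt/pve/cephfs-iso
        content iso,vztmpl
        options recover_session=clean
```

The storage would need to be unmounted and remounted (or the node rebooted) for a new mount option to take effect.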
  6.

    CephFS mount breaks with network connectivity interruption, does not fix when network resumes

    Tried to GoogleFu and didn't seem to find anyone else having this problem. Relevant PVE cluster is like 11 nodes, 3x of which run a Ceph cluster serving RBD and FS. All nodes in the cluster use the RBD storage and FS storage. VMdisks are allowed for RBD but not FS, FS is primarily ISOs in this...
  7.

    Proxmox Offline Mirror released!

    For the sake of curiosity, what kind of log insights would be useful for you? Jumping in half-way here so sorry if I'm not up to speed on full context :P
  8.

    Angry cluster config, now getting lots of 400 errors from web UI

    I can't speak for fabian, but in my case I _THINK_ it might have been because I added two VMs to a Resource Pool (when they previously were not a member of any Resource Pool). But the actual reason really was not clear. CTRL+F5 clearing the cache in my case seemed to do the "magical trick". As...
  9.

    [SOLVED] Issue with Web interface and Pools (poolid error)

    For future humans clearing browser cache can help: https://forum.proxmox.com/threads/angry-cluster-config-now-getting-lots-of-400-errors-from-web-ui.137613/#post-645114
  10.

    [SOLVED] New install errors 10 to 15 minutes after boot

    For future humans clearing browser cache can help: https://forum.proxmox.com/threads/angry-cluster-config-now-getting-lots-of-400-errors-from-web-ui.137613/#post-645114
  11.

    Angry cluster config, now getting lots of 400 errors from web UI

    Holy balls I can't believe clearing browser cache fixed this problem for me. THANK YOU FOR POSTING THIS! This is INSANE that this fixed it! It's as insane that the default solution in this thread is to wipe all nodes and rebuild... just for something like this... I have never seen this problem...
  12.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    Yeah doing the same method by IPs for all nodes on all nodes did the trick. Hoping this has a proper solution soon that is automated by the cluster. Hopefully someone gets helped by this work-around for now.
  13.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    oh it's because the webGUI uses IP not hostname... yay okay redo this work then....
  14.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    Ugh, this didn't even properly fix it anyway... randomly logged into the webGUI of a random node, tried to open a web CLI shell to one of the new nodes, and it still asked me for the fingerprint... fuck, this is so stupid
  15.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    And yes I have each node even connect to itself because I don't currently see a reason not to do that...
  16.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    For future human purposes: I created a list of commands containing all the nodes in it, and executed this on every node via CLI. THIS IS NOT THE IDEAL WAY TO DO THIS AND I KNOW IT IS A BAD PRACTICE BUT FOR NOW THIS IS GOOD ENOUGH: ssh -o StrictHostKeyChecking=accept-new -t HostName1 'exit' ssh...
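The truncated command list above can be sketched as a loop. Everything here is illustrative: the node names are placeholders, and the `echo` makes it a dry run (remove it to actually connect). As the post itself says, blindly accepting host keys is a stopgap, not a best practice.

```shell
#!/bin/sh
# Run on EVERY node: pre-accept the host key of every node in the cluster,
# so web CLI shells and SSH hops between nodes stop prompting.
# NODES is a placeholder list; "echo" makes this a dry run.
NODES="pve-node1 pve-node2 pve-node3"
for node in $NODES; do
    # accept-new records keys it has never seen, but still rejects CHANGED keys
    echo ssh -o StrictHostKeyChecking=accept-new -o BatchMode=yes "$node" exit
done
```

`accept-new` is safer than `StrictHostKeyChecking=no` because it only trusts first contact; a key that later changes still fails loudly.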
  17.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    Okay I actually need to ssh FROM every node in the cluster TO every node in the cluster to generate the known_hosts trust. guh this is a cluster-truck.
  18.

    SSH Key problems for new PVE Node joining existing cluster that had its hostname renamed before joining

    Feel free to move along and not even post if you don't see value in participating. I was simply asking for the proper process to do something that wasn't documented. And you're here to do... what exactly? Lecture me because I didn't say exactly what you wanted to hear? Why are you even bothering...
  19.

    SSH Key problems for new PVE Node joining existing cluster that had its hostname renamed before joining

    Are you just married to trying to lecture me? I'm going to ignore you because you seem more fixated on me fitting into your box of "help" than on actually getting anything accomplished. Go away, troll; I have actual work to do, not time to placate your pedantic ego-stroking.
  20.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    omfg and that method doesn't even actually solve the problem, this is just such a shit show... I just wanted to add two new PVE Nodes to this cluster and I'm burning way too much time on this SSH BS that should've been solved months ago >:|
