Search results

  1.

    Live Migrate VMs that use one or more SDNs -> Workable?

    I know this is probably an open-ended question, but I'm trying to do upgrade and other planning for a client who has a lot of SDNs going on within their PVE Cluster. There are a few servers in the same rack, connected to (I think) the same switch (the switching equipment isn't handled by me), and so...
  2.

    Default settings of containers and virtual machines

    I'm talking about configurations at the hypervisor level, not the guest OS level. This takes precedence over network boot for anything, since that happens post-BIOS init. I'm talking about changing the default configurations when creating a new LXC/VM within Proxmox VE, the defaults as Proxmox sees them. I... (see the qm/pct creation sketch after this list)
  3.

    zram - why bother?

    I've exhaustively worked through proving, in multiple environments, that leaving data in swap tangibly reduces the performance of whatever system is doing it, and also impacts other systems in the same environment that may or may not have anything in their own swap. Whether it's Windows, Linux, or...
  4.

    zram - why bother?

    Well, to clarify: my assumption is that swap should generally be empty day to day, and it should be available only as a "last resort when things really start getting angry". So I would rather not use any RAM at all that is managed by a service (zram, the service) and instead plan...
  5.

    Default settings of containers and virtual machines

    Network booting has no real capability of configuring the VM/LXC objects within Proxmox VE itself, which is really what this is about. I certainly love that Open Source lets us write our own integrations/automations, but I genuinely believe that, long-term, this is worthwhile for the...
  6.

    zram - why bother?

    Welp guess that was a misunderstanding on my part, thanks for clarifying. :) Okay so apart from trying to get ahead of OOM scenarios... is there _any_ other reason to use zram instead of just swap on disk?
  7.

    zram - why bother?

    I wasn't talking about running without swap; where did you get that impression? To me, using RAM to replace RAM when you're in an OOM situation is a liability for operations. I'd rather use a local SSD for swap when absolutely necessary (last resort), and day to day just install more RAM or...
  8.

    zram - why bother?

    On PVE Nodes with plenty of RAM (not even close to running out), why even bother with zram? I've inherited an environment with zram present on some of the PVE Nodes in the cluster, and it seems completely redundant. RAM used to... provide RAM when you're out of RAM? What? So far all the... (see the swap/zram inspection sketch after this list)
  9.

    pvescheduler doesn't retry to start after timeout from PVE Node power loss

    We just did some validation on some new PVE Nodes (and related switching) to see how they handle total power loss. Everything seemed to come up just fine, except that on both nodes pvescheduler tries to start up, times out (after 2 minutes?), and then never tries to start again. I... (a possible systemd drop-in sketch follows this list)
  10.

    CephFS mount breaks with network connectivity interruption, does not fix when network resumes

    I'm not entirely sure what syntax/structure to use to modify /etc/pve/storage.cfg to apply this, and the webGUI does not offer a nice field for adding mount options. How should I modify said config file to make that work? (see the manual CephFS mount sketch after this list)
  11.

    CephFS mount breaks with network connectivity interruption, does not fix when network resumes

    Tried some Google-fu and didn't seem to find anyone else having this problem. The relevant PVE cluster is about 11 nodes, 3 of which run a Ceph cluster serving RBD and FS. All nodes in the cluster use the RBD storage and the FS storage. VM disks are allowed on RBD but not on FS; FS is primarily ISOs in this...
  12.

    Proxmox Offline Mirror released!

    Out of curiosity, what kind of log insights would be useful for you? Jumping in halfway here, so sorry if I'm not up to speed on the full context :P
  13.

    Angry cluster config, now getting lots of 400 errors from web UI

    I can't speak for fabian, but in my case I _THINK_ it might have been because I added two VMs to a Resource Pool (when they previously were not members of any Resource Pool). But the actual reason really was not clear. CTRL+F5 to clear the cache seemed to do the "magical trick" in my case. As...
  14.

    [SOLVED] Issue with Web interface and Pools (poolid error)

    For future humans: clearing the browser cache can help: https://forum.proxmox.com/threads/angry-cluster-config-now-getting-lots-of-400-errors-from-web-ui.137613/#post-645114
  15.

    [SOLVED] New install errors 10 to 15 minutes after boot

    For future humans: clearing the browser cache can help: https://forum.proxmox.com/threads/angry-cluster-config-now-getting-lots-of-400-errors-from-web-ui.137613/#post-645114
  16.

    Angry cluster config, now getting lots of 400 errors from web UI

    Holy balls, I can't believe clearing the browser cache fixed this problem for me. THANK YOU FOR POSTING THIS! It's INSANE that this fixed it! It's just as insane that the default solution in this thread is to wipe all nodes and rebuild... just for something like this... I have never seen this problem...
  17.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    Yeah, doing the same method by IP for all nodes, on all nodes, did the trick. Hoping this gets a proper solution soon that is automated by the cluster. Hopefully this work-around helps someone for now. (see the SSH loop sketch after this list)
  18.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    Oh, it's because the webGUI uses the IP, not the hostname... yay, okay, redo this work then...
  19.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    Ugh, this didn't even properly fix it anyway... randomly logged into the webGUI of a random node, tried the webCLI Shell to one of the new nodes, and it still asked me for the fingerprint... fuck, this is so stupid
  20.

    New nodes joined to existing cluster, 8.2 updates making SSH key trusts not auto fixed and manual steps needed

    And yes, I even have each node connect to itself, because I don't currently see a reason not to do that...
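
Results 2 and 5 above are about wanting editable creation defaults at the hypervisor level. Proxmox VE does not currently expose such defaults in the GUI, so a common stopgap is to script creation with the preferred settings baked in. A minimal sketch; the VMID/CTID, storage name, bridge, and template file are example values, not anything the threads specify:

    # Hypothetical wrapper values -- adjust IDs, storage and bridge to the environment.
    qm create 9001 \
        --name default-vm \
        --memory 4096 --cores 2 --cpu host \
        --scsihw virtio-scsi-single \
        --scsi0 local-lvm:32 \
        --net0 virtio,bridge=vmbr0 \
        --ostype l26 --agent enabled=1

    # Same idea for containers; the template path is an example.
    pct create 9002 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
        --hostname default-ct \
        --memory 1024 --cores 1 \
        --rootfs local-lvm:8 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp \
        --unprivileged 1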
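
For the zram threads (results 3, 4, 6, 7 and 8), the practical question is what zram is actually doing on nodes that never come close to running out of RAM. A quick inspection answers most of it; the zram unit name varies with how it was packaged, so that part is an assumption to verify on the node:

    # What swap devices exist, their priority and current usage
    swapon --show
    cat /proc/swaps

    # zram-specific view: device size, stored vs. compressed data
    zramctl

    # How eagerly the kernel pushes pages to swap
    sysctl vm.swappiness

    # If zram turns out not to be worth keeping, find the unit that sets it up
    # (zramswap.service from zram-tools, or systemd-zram-setup@zram0.service
    # from zram-generator -- check rather than assume) before disabling it.
    systemctl list-units --all | grep -i zram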
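
For result 9, if pvescheduler really does give up after one timed-out start on cold boot, one possible stopgap (not an official fix, and the cause of the 2-minute timeout is still worth chasing) is a systemd drop-in that lets the unit retry instead of staying dead:

    # Sketch: drop-in so pvescheduler retries after a failed or timed-out start.
    mkdir -p /etc/systemd/system/pvescheduler.service.d
    printf '%s\n' \
        '[Unit]' \
        'StartLimitIntervalSec=0' \
        '' \
        '[Service]' \
        'Restart=on-failure' \
        'RestartSec=30' \
        > /etc/systemd/system/pvescheduler.service.d/retry.conf
    systemctl daemon-reload
    systemctl restart pvescheduler.service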
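
For result 10: the specific mount option being discussed isn't visible in the snippet, so it is left as a placeholder below. Before worrying about /etc/pve/storage.cfg syntax at all, it's worth confirming the option behaves as hoped with a manual kernel-client mount; whether the installed pve-storage version can persist extra CephFS mount options in storage.cfg is something to check in `man pvesm` rather than guess. A rough sketch, with <MON-IP>, <SECRET-FILE> and <the-option> as placeholders:

    # Manual test mount of CephFS with the candidate option (kernel client).
    mkdir -p /mnt/cephfs-test
    mount -t ceph <MON-IP>:6789:/ /mnt/cephfs-test \
        -o name=admin,secretfile=<SECRET-FILE>,<the-option>

    # The stanza PVE generated for the existing storage lives here:
    cat /etc/pve/storage.cfg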
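
Results 17-20 describe working around the 8.2 SSH host-key handling by connecting from every node to every node by IP (including each node to itself) so the keys get recorded, since the webGUI goes by IP rather than hostname. A rough sketch of that loop; the IP list is an example and must be replaced with the cluster's own addresses:

    # Run on every node. accept-new records the host key on first contact
    # without prompting; it does not override an existing, changed key.
    NODE_IPS="10.0.0.11 10.0.0.12 10.0.0.13"
    for ip in $NODE_IPS; do
        ssh -o StrictHostKeyChecking=accept-new root@"$ip" true
    done

    # Refresh the cluster-managed certificates/keys as well
    pvecm updatecerts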