Search results

  1. Memory usage accuracy has never been accurate and I still cannot find why after many years

    I'm rocking PVE v8.2.7 in my cluster, and I've upgraded it in-place since v2.3, and in all this time (over a decade?) I have almost never found the "Memory usage" metric shown in the webGUI for VMs to be accurate. I mainly run Linux VMs, but even for Windows, FreeBSD, whatever, it's _never_...
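
    A quick way to narrow down where the discrepancy comes from is to compare what PVE reports against what the guest itself thinks. Only a sketch: VMID 100 is a placeholder, and it assumes the usual culprit, i.e. without the balloon driver / qemu-guest-agent reporting guest-internal stats, PVE can only show what QEMU has allocated on the host rather than what the guest considers "used".

    ```
    # on the node: PVE/QEMU view of the VM's memory (hypothetical VMID 100)
    qm status 100 --verbose | grep -E 'balloon|mem'
    pvesh get /nodes/$(hostname)/qemu/100/status/current

    # inside the guest, for comparison (page cache often explains the gap)
    free -m
    ```
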
  2. Question for PVE Staff/Devs - Enabling cephadm, what are the known (with evidence) problems?

    As I said, I'm trying to get insights from the Devs here, and not opinions but facts with evidence. I don't really see how that was unclear.
  3. Question for PVE Staff/Devs - Enabling cephadm, what are the known (with evidence) problems?

    Hey Folks, hoping to get some response from the Proxmox Staff/Devs on this. Namely, looking for evidence of known problems in a particular scenario. I'm currently building out a PoC for a 3x PVE cluster, which also provisions a Ceph cluster, and I extend that Ceph Cluster to provide NFS HA...
  4. zram - why bother?

    I want to clarify a bit what I mean by "Swap, in my professional experience and opinion, should only ever be used as a last resort." Swap should (almost) _NEVER_ be turned off, except maybe in special circumstances like on kubernetes nodes where it is part of the recommended architecture (by the...
  5. zram - why bother?

    I've been working with Linux for 20+ years, and using swap for more than emergency situations has tangible performance and wear-level costs. I am in the real world, just like you, and it is my responsibility to deal with aspects of architecture like this. Consider for a moment what forum we're...
  6. zram - why bother?

    Sure, and none of that warrants ZRAM IMO. Also, you don't need swap to enable rapid RAM loading/unloading; that would actually substantially slow it down, as the swap device itself would be a huge bottleneck relative to the performance of said RAM. It would be substantially more sensible to just...
  7. Node elevated CLI using Proxmox VE Authentication Server, can't see how

    Well, the LDAP aspect is more me asking ahead on the topic, not something I need immediately. That being said, I was really hoping that the PVE Cluster environment would manage PAM on behalf of the admins, extending PAM to work with the Proxmox VE Authentication Server by default. And if others...
  8. Node elevated CLI using Proxmox VE Authentication Server, can't see how

    I'm trying to figure out how to use accounts managed by the Proxmox VE Authentication Server in such a way that I can get CLI access (elevated/sudo/whatever) to the nodes in the cluster. Whether it's all nodes, limited nodes, or whatever. I can't find anyone talking about this so far, and the...
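
    As far as I can tell, users in the Proxmox VE Authentication Server realm live only in the cluster's user database (/etc/pve/user.cfg) and are not Unix accounts, so a node shell still needs a matching PAM (Linux) user. A minimal sketch of that workaround, with "jane" as a hypothetical admin:

    ```
    # on each node that should allow shell access (hypothetical user "jane")
    apt install sudo                 # if not already present
    useradd -m -s /bin/bash jane
    passwd jane
    usermod -aG sudo jane            # Debian's sudo group

    # optionally map the same person to a PAM-realm webGUI/API login as well
    pveum user add jane@pam
    pveum acl modify / --users jane@pam --roles Administrator
    ```
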
  9. Default settings of containers and virtual machines

    Ahh nice! I can see the appeal of APIs, but there are use-cases of mine where the webGUI is the method used, hence my interest in this. :)
  10. Live Migrate VMs that use one or more SDNs -> Workable?

    I know this is probably an open-ended question, but I'm trying to do upgrade and other planning for a client who has a lot of SDNs going on within their PVE Cluster. There are a few servers in the same rack, connected to (I think) the same switch (switching equipment is not handled by me), and so...
  11. Default settings of containers and virtual machines

    I'm talking about configurations at the hypervisor level, not the guestOS level. This comes before network boot entirely, since that's post-BIOS init. I'm talking about changing the default configurations used when creating a new LXC/VM within Proxmox VE, the defaults as Proxmox sees them. I...
  12. zram - why bother?

    I've exhaustively worked through proving in multiple environments that leaving data in swap tangibly reduces the performance of whatever system is doing it, as well as impacting other systems in the same environment that may or may not have anything in their own swap. Whether it's Windows, Linux, or...
  13. zram - why bother?

    Well, to clarify, my assumption is that swap should generally be empty day to day, and it should be available as "only last resort when things really start getting angry". So I would rather not use any RAM at all that is managed by a service (the zram service) and instead plan...
  14. Default settings of containers and virtual machines

    Network booting has no real capability of configuring the VM/LXC objects within Proxmox VE itself, which is really what this is about. I certainly love the ability in Open Source stuff to write our own integrations/automations, but I genuinely believe that long-term this is worthwhile for the...
  15. zram - why bother?

    Welp guess that was a misunderstanding on my part, thanks for clarifying. :) Okay so apart from trying to get ahead of OOM scenarios... is there _any_ other reason to use zram instead of just swap on disk?
  16. zram - why bother?

    I wasn't talking about running without swap, where did you get that impression? To me, using RAM to replace RAM when you're in an OOM situation is a liability for operations. I'd rather use a local SSD for swap when absolutely necessary (last resort) and day to day just install more RAM or...
  17. zram - why bother?

    On PVE Nodes with plenty of RAM (not even close to running out), why even bother with zram? I've inherited an environment with zram present on some of the PVE Nodes in the cluster, and it seems completely redundant. RAM used to... provide RAM when you're out of RAM? What? So far all the...
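
    For anyone else inheriting a setup like this, a quick way to see what zram is actually doing on a node before deciding whether to rip it out (nothing PVE-specific here, just stock util-linux/systemd tooling):

    ```
    # what zram devices exist and how much (compressed) swap they actually hold
    zramctl
    swapon --show
    cat /proc/swaps

    # how eager the node is to swap at all, and which unit set zram up
    sysctl vm.swappiness
    systemctl list-units | grep -i zram
    ```
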
  18. pvescheduler doesn't retry to start after timeout from PVE Node power loss

    We just did some validation on some new PVE Nodes (and related switching) for how they handle total power loss. Everything seemed to come up just fine, except that on both nodes pvescheduler tries to start up, times out (after 2 minutes?), and then never tries to start again. I...
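
    For what it's worth, the workaround I'd sketch, assuming the only problem really is the start timeout after power loss, is a systemd drop-in so the unit keeps retrying instead of giving up (the timings here are guesses, tune to taste):

    ```
    # /etc/systemd/system/pvescheduler.service.d/override.conf
    [Unit]
    StartLimitIntervalSec=0

    [Service]
    Restart=on-failure
    RestartSec=30
    ```

    Create it with `systemctl edit pvescheduler.service` (or drop the file in place and run `systemctl daemon-reload`), then re-test the power-loss scenario.
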
  19. CephFS mount breaks with network connectivity interruption, does not fix when network resumes

    I'm not entirely sure what syntax/structure to use to modify the /etc/pve/storage.cfg to apply this, and the webGUI does not offer a nice field for adding mount options. How should I modify said config file to make that work?
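
    Purely as something to test, not verified: I have not confirmed that the cephfs storage type accepts a free-form mount-options line at all (the nfs type does have an "options" property), and recover_session=clean is only my guess at the option being discussed in this thread. With "cephfs" as a placeholder storage ID, the /etc/pve/storage.cfg stanza would then look roughly like:

    ```
    cephfs: cephfs
            path /mnt/pve/cephfs
            content iso,vztmpl,backup
            fs-name cephfs
            options recover_session=clean
    ```

    If pvesm rejects the extra line, the fallback would be mounting the filesystem manually (fstab or a systemd mount unit) with that option and pointing a plain "dir" storage at the mount point.
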
  20. CephFS mount breaks with network connectivity interruption, does not fix when network resumes

    Tried some GoogleFu and didn't seem to find anyone else having this problem. The relevant PVE cluster is 11 nodes, 3 of which run a Ceph cluster serving RBD and FS. All nodes in the cluster use the RBD storage and FS storage. VM disks are allowed on RBD but not FS; FS is primarily ISOs in this...
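
    In case it helps anyone hitting the same thing, the checks and the manual recovery I would sketch look roughly like this; the storage ID "cephfs" is a placeholder, and the assumption (worth verifying) is that pvestatd will re-activate the storage on its own once the stale mount is gone.

    ```
    # is the kernel client complaining/blocklisted, and is the mount still there?
    dmesg | grep -i ceph | tail -n 20
    mount | grep cephfs
    pvesm status

    # force the stale mount away (-l as a lazy fallback if -f hangs),
    # then let pvestatd pick it back up on its next cycle
    umount -f /mnt/pve/cephfs
    systemctl restart pvestatd
    ```
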