Recent content by Binary Bandit

  1. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Love it! My thanks to all for keeping this alive. Proxmox 7 support is nearing its end, so I'm looking into a direct upgrade to 8. I see two choices: 1) remove OMSA 10.1 (see how I did this install here) and move to SNMP monitoring of the iDRAC, or learn checkmk; 2) update to OMSA 10.3 via the...
  2. Ceph PGs reported too high when they are exactly what is requested

    Morning from PST, Shanreich. Thank you for the response. We're running Ceph 16 / Pacific; I posted all of our versions below. It looks like David (post #7 on the bug URL, thank you for that) is reporting this issue with the exact version we are using. I've spent several hours looking through...
  3. Ceph PGs reported too high when they are exactly what is requested

    Hi All, I just patched our Proxmox 7 cluster to the latest version. After this, "ceph health detail" reports: HEALTH_WARN 2 pools have too many placement groups [WRN] POOL_TOO_MANY_PGS: 2 pools have too many placement groups Pool device_health_metrics has 1 placement groups, should have 1...
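    For anyone hitting the same warning, a couple of read-only commands show what the autoscaler wants versus what each pool actually has (a sketch, not commands quoted from the thread):

        # Per-pool PG counts, targets, and autoscale mode
        ceph osd pool autoscale-status
        # Current pg_num vs pg_num_target for every pool
        ceph osd pool ls detail
        # Confirm all daemons agree on the Pacific version
        ceph versions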
  4. understanding ZFS RAM usage

    Thanks leesteken, that checks out. Here are updated numbers for column B: For those who find this thread, I obtained the updated Exchange and DC VM numbers by opening Windows Task Manager on each and adding the RAM "In use" to "Cached". Now I can see that the numbers line up. The remaining...
  5. understanding ZFS RAM usage

    Hi All, Is there a command or two that illustrates where RAM is being consumed on our Proxmox systems that are using ZFS? For example, here is the RAM usage on a new system: Column B is my math showing expectations with my current understanding of what is using RAM ... This server has 128GB...
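    For others doing the same math: on ZFS systems the ARC is usually the missing line item, since "free" reports it as used memory rather than cache. It can be read directly (a sketch; the 16 GiB cap is an example value, not a recommendation):

        # Summary of ARC size, target, and limits
        arc_summary | head -n 30
        # Raw counters: current ARC size and configured maximum, in bytes
        awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
        # Example: cap the ARC at 16 GiB (then run update-initramfs -u and reboot)
        echo "options zfs zfs_arc_max=17179869184" >> /etc/modprobe.d/zfs.conf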
  6. VM shutdown, KVM: entry failed, hardware error 0x80000021

    Interesting posts. Mine is crashing within a couple of hours of a 3:15AM backup ... every time. I'll likely move to Win 2019 for this install but will watch this thread.
  7. VM shutdown, KVM: entry failed, hardware error 0x80000021

    Throwing my hat in the ring for this issue ... well, it looks like the same issue to me. Let me know what I can contribute. We're running a single Windows Server 2022 VM on a new Dell T350 and just experienced this issue this morning, early AM. A backup job finished at 3:39:52 and then the...
  8. Ceph librbd vs krbd

    Hi All, We just experienced a bug that caused us to switch to krbd. Is there a good reason to switch back once the bug is resolved? It seems that krbd might be faster and I don't see any features that I'm giving up. best, James
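    For reference, the librbd/krbd choice is a per-storage flag in Proxmox, so switching back later is cheap; running guests only pick it up after a stop/start or migration (the storage name below is an example):

        # Attach RBD images through the kernel client
        pvesm set ceph-vm --krbd 1
        # Revert to librbd (QEMU's userspace client)
        pvesm set ceph-vm --krbd 0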
  9. Possible bug after upgrading to 7.2: VM freeze if backing up large disks

    Morning from PST all, Just a note to perhaps help someone else experiencing this frustrating issue. We experienced our 1.5-hour multi-VM backup (Ceph and Proxmox's built-in backup, not PBS) suddenly changing to 12+ hours. On top of that, the VMs with the largest disks (750GB) would drop in...
  10. move disk on live VM causes cluster node to reboot

    Hi All, To keep with our timeline we're going to back up and restore from shared storage ... I'm not planning on troubleshooting this. Just an FYI to any posters trying to help. best, James
  11. [SOLVED] Unable to properly remove node from cluster

    Just wanted to confirm that this is an overzealous error message ... exact same issue for our 5-node cluster (7.0) when shrinking it to 3 nodes.
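    For anyone searching later, the shrink itself is the standard procedure, and the message above can fire even when it succeeds (node name is an example; only run delnode once the node is permanently offline):

        # From a remaining cluster member
        pvecm delnode pve04
        # Optional cleanup of the leftover node directory
        rm -r /etc/pve/nodes/pve04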
  12. move disk on live VM causes cluster node to reboot

    Hello all, I'm trying to figure out why using move disk on a live VM causes one of our cluster nodes to reboot. We are looking to migrate a live VM from Ceph to LVM storage; this would then let us live-migrate the VM to a node that isn't attached to Ceph. When we do this...
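    For context, this is the operation that triggers the reboot, nothing exotic (VM ID, disk, and storage name are examples):

        # Move a running VM's disk from Ceph to local LVM and drop the source copy
        qm move_disk 101 scsi0 local-lvm --delete 1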
  13. SWAP usage

    Thanks avw, Another backup triggered last night and swap is holding at about 6.5 of 8GB used. There is a significantly larger amount of storage on the node that has the 6.5GB of swap usage vs the other cluster nodes. My guess is that this is why swap is used ... based on what you are saying...
  14. SWAP usage

    Here's what I see in Proxmox ... this is the max memory graph for a day, with a dot at the time the problem started: Zabbix: It looks a lot like RAM is being moved to disk for some reason. That dip two days ago is the first time the backup ran.
  15. SWAP usage

    Hi All, We recently started using the built-in backup with version 7 of Proxmox. When the backup runs it pushes up swap usage. Should I be concerned? This is the result of the backup running through all of the VMs on this node (1 of 3) in our cluster. Well, to be clear, I'm correlating the...
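    If someone wants to check the same thing on their node, a few commands help separate harmless cache pressure from real memory starvation (a sketch; none of this is from the thread):

        # Overall swap in use
        free -h
        # Which processes hold the swap, largest first
        grep -H VmSwap /proc/[0-9]*/status 2>/dev/null | sort -t: -k3 -rn | head
        # How aggressively the kernel swaps (the Linux default is 60)
        sysctl vm.swappiness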
