Search results

  1. How to locate most efficient CPU vs your host CPU

    That was my thought as well. We'll be using "Broadwell-noTSX-IBRS" for now. There are big performance gains vs. the "KVM 64" CPU, and I can drop a new cluster node without having to match the CPU down to the microcode exactly.
  2. How to locate most efficient CPU vs your host CPU

    @bbgeek17, thank you. What you suggest is next on my list to understand. I stopped after seeing "If you care about live migration and security, and you have only Intel CPUs or only AMD CPUs, choose the lowest generation CPU model of your cluster." in the documentation here. Is this as simple as...
  3. How to locate most efficient CPU vs your host CPU

    Hi All, I want to use the newest processor type for our VMs. After some digging around on the Internet, reading this, and figuring out that our CPU ... an E5-2667 v4 ... is from the Broadwell family ... and running "kvm -cpu help" for the console, I see "x86 Broadwell-v4 Intel Core...
  4. No Networking After Upgrade to 8.2

    When I upgraded my PVE 7 to 8 cluster, I read and re-read the instructions over several days. I've never felt the need to do this for any sub-version (7.x, for example) upgrade. I'll now be reading upgrade notes. I'm grateful to have found this thread, but I'm definitely frustrated.
  5. Ceph PGs reported too high when they are exactly what is requested

    Hello everyone, This is just a bit of encouragement for first-time Ceph upgraders on PVE7. About a week ago, I upgraded our 3-node cluster per the official instructions here. It went smoothly with no issues. Just be sure to read everything carefully. Oh, and the bug described here is, of...
  6. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Hi All, For anyone who installed OMSA on PVE7, as I did here, this is just a note to let you know that an in-place upgrade seems to work. Our hyper-converged Ceph cluster has been running well for 24 hours. I'll post back if there are issues. best, James
  7. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Love it! My thanks to all for keeping this alive. Proxmox 7 support is nearing its end, so I'm looking into a direct upgrade to 8. I see two choices. 1) Remove OMSA 10.1 (see how I did this install here.) / move to SNMP monitoring IDRAC or learn checkmk. 2) Update to OMSA 10.3 via the...
  8. Ceph PGs reported too high when they are exactly what is requested

    Morning from PST Shanreich, Thank you for the response. We're running Ceph 16 / Pacific. I posted all of our versions below. Looks like David / #7 on the bug URL (thank you for that) is reporting this issue with the exact version we are using. I've spent several hours looking through...
  9. Ceph PGs reported too high when they are exactly what is requested

    Hi All, I just patched our Proxmox 7 cluster to the latest version. After this "ceph health detail" reports: HEALTH_WARN 2 pools have too many placement groups [WRN] POOL_TOO_MANY_PGS: 2 pools have too many placement groups Pool device_health_metrics has 1 placement groups, should have 1...
  10. understanding ZFS RAM usage

    Thanks leesteken, that checks out. Here are updated numbers for column B: For those who find this thread, I obtained the updated Exchange and DC VM numbers by opening Windows task manager on each and adding the RAM "In use" to "Cached". Now I can see that the numbers line up. The remaining...
  11. understanding ZFS RAM usage

    Hi All, Is there a command or two that illustrates where RAM is being consumed on our Proxmox systems that are using ZFS? For example, here is the RAM usage on a new system: Column B is my math showing expectations with my current understanding of what is using RAM ... This server has 128GB...
  12. VM shutdown, KVM: entry failed, hardware error 0x80000021

    Interesting posts. Mine is crashing within a couple of hours of a 3:15AM backup ... every time. I'll likely move to Win 2019 for this install but will watch this thread.
  13. VM shutdown, KVM: entry failed, hardware error 0x80000021

    Throwing my hat in the ring for this issue ... well, it looks like the same issue to me. Let me know what I can contribute. We're running a single Windows Server 2022 VM on a new Dell T350 and just experienced this issue this morning, early AM. A backup job finished at 3:39:52 and then the...
  14. Ceph librbd vs krbd

    Hi All, We just experienced a bug that caused us to switch to krbd. Is there a good reason to switch back once the bug is resolved? It seems that krbd might be faster and I don't see any features that I'm giving up. best, James
  15. Possible bug after upgrading to 7.2: VM freeze if backing up large disks

    Morning from PST all, Just a note to perhaps help someone else experiencing this frustrating issue. We experienced our 1.5-hour multi-VM backup (Ceph and Proxmox's built-in backup, not PBS) suddenly changing to 12+ hours. On top of that, the VMs with the largest disks (750GB) would drop in...
  16. move disk on live VM causes cluster node to reboot

    Hi All, To keep with our timeline we're going to back up and restore from shared storage ... I'm not planning on troubleshooting this. Just an FYI to any posters trying to help. best, James
  17. [SOLVED] Unable to properly remove node from cluster

    Just wanted to confirm that this is an overzealous error message ... same exact issue for our 5 node cluster (7.0) when shrinking it to 3 nodes.
  18. move disk on live VM causes cluster node to reboot

    Hello all, I'm trying to figure out why using move disk on a live VM causes one of our cluster nodes to reboot. We are looking to migrate a live VM from CEPH to LVM storage. The reason being that this will then enable us to live migrate the VM to a non-CEPH attached node. When we do this...
  19. SWAP usage

    Thanks avw, Another backup triggered last night and swap is holding at about 6.5 of 8GB used. There is a significantly larger amount of storage on the node that has the 6.5GB of swap usage vs the other cluster nodes. My guess is that this is why swap is used ... based on what you are saying...
