Search results

  1. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    @Lokytech, thank you for this thread. We're starting to look at V9 and a server refresh. It's helpful to know that this still works. We're planning our next server refresh and are investigating the use of iDRAC with SNMP. In the past, we've done this with customers who use (gasp) VMWare...
  2. pointers/advice for server refresh

    Hello everyone, It's time for a hardware refresh for our three-node cluster. I appreciate any advice, links to relevant threads about hardware issues, and so on. Here's what we're considering x 3: Latest Proxmox 8.4 Dell PowerEdge R6525 10x2.5" NVMe No TPM (I don't see a reason for this?) 1...
  3. Should I Enable Hardware Offloading on ConnectX-6 Lx NICs for a Ceph Cluster on Proxmox VE?

    Thanks for the post devaux ... I'm following along as I am thinking about using these in a new build
  4. How to locate most efficient CPU vs your host CPU

    That was my thought as well. We'll be using "Broadwell-noTSX-IBRS" for now. There are big performance gains vs. the "KVM 64" CPU, and I can drop a new cluster node without having to match the CPU down to the microcode exactly.
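    A minimal sketch of applying that model to a guest from the PVE shell; the VM ID 100 is illustrative, and the change only takes effect on the next full stop/start:

    ```
    # set the vCPU model for an existing guest (VM ID 100 is illustrative)
    qm set 100 --cpu Broadwell-noTSX-IBRS

    # confirm the setting; it takes effect on the next full VM stop/start
    qm config 100 | grep ^cpu
    ```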
  5. How to locate most efficient CPU vs your host CPU

    @bbgeek17, thank you. What you suggest is next on my list to understand. I stopped after seeing "If you care about live migration and security, and you have only Intel CPUs or only AMD CPUs, choose the lowest generation CPU model of your cluster." in the documentation here. Is this as simple as...
  6. How to locate most efficient CPU vs your host CPU

    Hi All, I want to use the newest processor type for our VMs. After some digging around on the Internet, reading this, and figuring out that our CPU ... an E5-2667 v4 ... is from the Broadwell family ... and running "kvm -cpu help" for the console, I see "x86 Broadwell-v4 Intel Core...
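    A short sketch of those checks run on a PVE host; the grep patterns are only examples:

    ```
    # list the CPU models QEMU/KVM knows about on this host
    kvm -cpu help | grep -i broadwell

    # show the host's physical CPU model (here an E5-2667 v4) to match against that list
    grep -m1 'model name' /proc/cpuinfo
    ```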
  7. No Networking After Upgrade to 8.2

    When I upgraded my PVE 7 to 8 cluster, I read and re-read the instructions over several days. I've never felt the need to do this for any sub-version (7.x, for example) upgrade. I'll now be reading upgrade notes. I'm grateful to have found this thread, but I'm definitely frustrated.
  8. Ceph PGs reported too high when they are exactly what is requested

    Hello everyone, This is just a bit of encouragement for first-time Ceph upgraders on PVE7. About a week ago, I upgraded our 3-node cluster per the official instructions here. It went smoothly with no issues. Just be sure to read everything carefully. Oh, and the bug described here is, of...
  9. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Hi All, For anyone who installed OMSA on PVE7, as I did here, this is just a note to let you know that an in-place upgrade seems to work. Our hyper-converged Ceph cluster has been running well for 24 hours. I'll post back if there are issues. best, James
  10. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Love it! My thanks to all for keeping this alive. Proxmox 7 support is nearing its end, so I'm looking into a direct upgrade to 8. I see two choices: 1) Remove OMSA 10.1 (see how I did this install here) and move to SNMP monitoring of the iDRAC, or learn checkmk. 2) Update to OMSA 10.3 via the...
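    If option 1 is taken, a rough sketch of the removal, assuming OMSA was installed from Dell's apt repository as in the tutorial above:

    ```
    # list the installed OMSA packages
    dpkg -l 'srvadmin-*'

    # stop the OMSA services, then purge the packages
    /opt/dell/srvadmin/sbin/srvadmin-services.sh stop
    apt purge 'srvadmin-*'
    ```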
  11. Ceph PGs reported too high when they are exactly what is requested

    Morning from PST, Shanreich. Thank you for the response. We're running Ceph 16 / Pacific. I posted all of our versions below. Looks like David / #7 on the bug URL (thank you for that) is reporting this issue with the exact version we are using. I've spent several hours looking through...
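    For anyone following along, the version details referenced above come from commands like these:

    ```
    # Proxmox and bundled package versions, including the ceph packages
    pveversion -v

    # Ceph daemon versions across the cluster (16.x / Pacific in this case)
    ceph versions
    ```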
  12. Ceph PGs reported too high when they are exactly what is requested

    Hi All, I just patched our Proxmox 7 cluster to the latest version. After this, "ceph health detail" reports:
    HEALTH_WARN 2 pools have too many placement groups
    [WRN] POOL_TOO_MANY_PGS: 2 pools have too many placement groups
        Pool device_health_metrics has 1 placement groups, should have 1...
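    A minimal sketch of inspecting the autoscaler's view of those pools:

    ```
    # full health output, including the POOL_TOO_MANY_PGS detail
    ceph health detail

    # compare each pool's actual PG_NUM with the autoscaler's target
    ceph osd pool autoscale-status

    # current pg_num for one of the flagged pools
    ceph osd pool get device_health_metrics pg_num
    ```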
  13. understanding ZFS RAM usage

    Thanks leesteken, that checks out. Here are updated numbers for column B: For those who find this thread, I obtained the updated Exchange and DC VM numbers by opening Windows task manager on each and adding the RAM "In use" to "Cached". Now I can see that the numbers line up. The remaining...
  14. understanding ZFS RAM usage

    Hi All, Is there a command or two that illustrates where RAM is being consumed on our Proxmox systems that are using ZFS? For example, here is the RAM usage on a new system: Column B is my math showing expectations with my current understanding of what is using RAM ... This server has 128GB...
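    As a starting point, a minimal sketch of commands that show where ZFS is holding RAM on a PVE host:

    ```
    # overall memory picture; note the ZFS ARC counts as "used", not "buff/cache"
    free -h

    # ARC size, target, and hit statistics
    arc_summary | head -n 40

    # raw ARC counters; the "size" line is the current ARC size in bytes
    awk '/^size / {print $1, $3}' /proc/spl/kstat/zfs/arcstats
    ```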
  15. VM shutdown, KVM: entry failed, hardware error 0x80000021

    Interesting posts. Mine is crashing within a couple of hours of a 3:15AM backup ... every time. I'll likely move to Win 2019 for this install but will watch this thread.
  16. VM shutdown, KVM: entry failed, hardware error 0x80000021

    Throwing my hat in the ring for this issue ... well, it looks like the same issue to me. Let me know what I can contribute. We're running a single Windows Server 2022 VM on a new Dell T350 and just experienced this issue this morning, early AM. A backup job finished at 3:39:52 and then the...
  17. Ceph librbd vs krbd

    Hi All, We just experienced a bug that caused us to switch to krbd. Is there a good reason to switch back once the bug is resolved? It seems that krbd might be faster and I don't see any features that I'm giving up. best, James
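    For reference, a sketch of where that switch lives; the storage ID and pool name are illustrative, and running guests only pick the change up after a full stop/start or migration:

    ```
    # enable kernel RBD for an existing Ceph storage entry (storage ID is illustrative)
    pvesm set ceph-vm --krbd 1

    # the same option as it appears in /etc/pve/storage.cfg
    # rbd: ceph-vm
    #         pool vm-pool
    #         content images
    #         krbd 1
    ```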
  18. Possible bug after upgrading to 7.2: VM freeze if backing up large disks

    Morning from PST, all. Just a note to perhaps help someone else experiencing this frustrating issue. We experienced our 1.5-hour multi-VM backup (Ceph and Proxmox's built-in backup, not PBS) suddenly changing to 12+ hours. On top of that, the VMs with the largest disks (750GB) would drop in...