Recent content by Binary Bandit

  1. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    @Lokytech, thank you for this thread. We're starting to look at V9 and a server refresh, and it's helpful to know that this still works. As part of that refresh we're investigating the use of iDRAC with SNMP. In the past, we've done this with customers who use (gasp) VMware...
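    For anyone starting the same investigation, a minimal SNMP probe of an iDRAC looks roughly like the sketch below; the IP, community string, and choice of Dell's enterprise OID subtree (1.3.6.1.4.1.674) are assumptions to adapt to your environment, not something from the original post:

        snmpwalk -v2c -c public 192.0.2.10 1.3.6.1.4.1.674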
  2. pointers/advice for server refresh

    Hello everyone, it's time for a hardware refresh for our three-node cluster. I appreciate any advice, links to relevant threads about hardware issues, and so on. Here's what we're considering, x 3: latest Proxmox 8.4; Dell PowerEdge R6525; 10x 2.5" NVMe; no TPM (I don't see a reason for this?); 1...
  3. Should I Enable Hardware Offloading on ConnectX-6 Lx NICs for a Ceph Cluster on Proxmox VE?

    Thanks for the post, devaux ... I'm following along, as I'm thinking about using these in a new build.
  4. How to locate most efficient CPU vs your host CPU

    That was my thought as well. We'll be using "Broadwell-noTSX-IBRS" for now. There are big performance gains vs. the "KVM 64" CPU, and I can drop in a new cluster node without having to match the CPU down to the microcode exactly.
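    For reference, applying that CPU type to a VM from the host CLI is a one-liner; the VMID 100 below is just a placeholder, and the same setting is available in the GUI under the VM's Processors options:

        qm set 100 --cpu Broadwell-noTSX-IBRS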
  5. How to locate most efficient CPU vs your host CPU

    @bbgeek17, thank you. What you suggest is next on my list to understand. I stopped after seeing "If you care about live migration and security, and you have only Intel CPUs or only AMD CPUs, choose the lowest generation CPU model of your cluster." in the documentation here. Is this as simple as...
  6. How to locate most efficient CPU vs your host CPU

    Hi All, I want to use the newest processor type for our VMs. After some digging around on the Internet, reading this, figuring out that our CPU ... an E5-2667 v4 ... is from the Broadwell family ... and running "kvm -cpu help" at the console, I see "x86 Broadwell-v4 Intel Core...
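    If anyone wants to repeat that lookup, the model list can be filtered directly at the console; this is just the command from the post with a grep added:

        kvm -cpu help | grep -i broadwell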
  7. No Networking After Upgrade to 8.2

    When I upgraded my cluster from PVE 7 to 8, I read and re-read the instructions over several days. I've never felt the need to do this for any sub-version (7.x, for example) upgrade. I'll now be reading the upgrade notes even for those. I'm grateful to have found this thread, but I'm definitely frustrated.
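    One note worth adding for anyone else planning the 7-to-8 jump: Proxmox ships a pre-upgrade checker that flags known issues before you start; running it before the major upgrade is as simple as:

        pve7to8 --full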
  8. Ceph PGs reported too high when they are exactly what is requested

    Hello everyone, this is just a bit of encouragement for first-time Ceph upgraders on PVE7. About a week ago, I upgraded our 3-node cluster per the official instructions here. It went smoothly with no issues. Just be sure to read everything carefully. Oh, and the bug described here is, of...
  9. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Hi All, For anyone who installed OMSA on PVE7, as I did here, this is just a note to let you know that an in-place upgrade seems to work. Our hyper-converged Ceph cluster has been running well for 24 hours. I'll post back if there are issues. best, James
  10. [TUTORIAL] Dell Openmanage on Proxmox 6.x

    Love it! My thanks to all for keeping this alive. Proxmox 7 support is nearing its end, so I'm looking into a direct upgrade to 8. I see two choices. 1) Remove OMSA 10.1 (see how I did that install here) and move to SNMP monitoring of the iDRAC, or learn checkmk. 2) Update to OMSA 10.3 via the...
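    If anyone goes with option 1, the OMSA packages can be listed and removed with the usual tools; the srvadmin-* naming matches how Dell's repo ships them, but treat this as a sketch to verify against your own host before running it:

        dpkg -l 'srvadmin-*'
        apt remove 'srvadmin-*'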
  11. Ceph PGs reported too high when they are exactly what is requested

    Morning from PST, Shanreich. Thank you for the response. We're running Ceph 16 / Pacific. I posted all of our versions below. Looks like David / #7 on the bug URL (thank you for that) is reporting this issue with the exact version we are using. I've spent several hours looking through...
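    For anyone comparing against their own cluster, the same version information can be gathered with the standard commands:

        pveversion -v
        ceph versions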
  12. Ceph PGs reported too high when they are exactly what is requested

    Hi All, I just patched our Proxmox 7 cluster to the latest version. After this, "ceph health detail" reports:
        HEALTH_WARN 2 pools have too many placement groups
        [WRN] POOL_TOO_MANY_PGS: 2 pools have too many placement groups
            Pool device_health_metrics has 1 placement groups, should have 1...
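    For context on this warning, the PG autoscaler's view of each pool can be compared against what the health check is complaining about; this is a generic check rather than output from the original post:

        ceph osd pool autoscale-status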
  13. understanding ZFS RAM usage

    Thanks leesteken, that checks out. Here are updated numbers for column B: For those who find this thread, I obtained the updated Exchange and DC VM numbers by opening Windows Task Manager on each and adding the RAM "In use" to "Cached". Now I can see that the numbers line up. The remaining...
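    For anyone reconciling host RAM the same way, the ZFS ARC size on the Proxmox host can be read directly; this is a generic check I'd add for completeness, not something taken from the original post:

        arc_summary | head -n 25
        awk '$1 == "size" {printf "ARC size: %.1f GiB\n", $3/2^30}' /proc/spl/kstat/zfs/arcstats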