Search results

  1. Demystifying Load Averages

    I have, indeed. That's been very informative. Just wondering what additional, Prox-specific input users might be able to offer on this metric's usefulness.
  2. Demystifying Load Averages

    Hi All, I am hoping to get some insight into why my cluster load averages are fairly high despite low CPU Loads & I/O Wait. Basic Overview: 3x Dell R740xD; Dual Xeon Gold 6138 (2x 20/40 = 80c), 10x32GB 2666MHz DDR4 Storage Type: Ceph (hyper converged) Here's a snapshot from my Grafana...
  3. [SOLVED] Determining node status in shell

    Thanks Dominik. Would you know of another method of pulling the 1/0 values from the API? I can do it with this bit of code - pvesh get /cluster/status --human-readable --noborder | awk 'NR != 1' | awk '{printf " %s\n", $7}' | sort - but I'd prefer a one-liner that will echo 1 or 0 of an individual...
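    As a sketch of what such a one-liner might look like: filter the `pvesh get /cluster/status --noborder` table for a single node's name and print its online column. There is no live cluster here, so a captured-style sample stands in for the real output, and the column positions are assumptions.

    ```shell
    # Hypothetical sketch: echo the online flag (1/0) of a single node.
    # The 'sample' variable stands in for `pvesh get /cluster/status --noborder`
    # output; the name and online column positions are assumed.
    sample='id        name type    online
    cluster   demo cluster 1
    node/pve1 pve1 node    1
    node/pve2 pve2 node    0'
    node=pve2
    echo "$sample" | awk -v n="$node" '$2 == n && $3 == "node" {print $4}'
    ```

    Against a real cluster, the `sample` variable would be replaced by the `pvesh` call itself.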
  4. [SOLVED] Determining node status in shell

    Other than checking ping, is there another, more reliable method using the Prox API to determine if a specific node is alive or dead? I was hoping that pvecm had an option to poll specific nodes with pvecm status, but it only provides the overview. I am writing an if-fi block that will determine which...
  5. VM shutdown on host shutdown.

    @fabian - Finally getting a chance to try out your suggestion... pvesh create /nodes/{host}/stopall works except for HA-enabled VMs. Is there an ALL value that can be passed to ha-manager set <sid> -state started/stopped ? If not, I can at least manually stop each HA member in my script.
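    Since `ha-manager set` appears to take only a single `<sid>`, one way to script the missing ALL case is to loop over the resources that `ha-manager status` reports. A minimal sketch: sample status output stands in for a live cluster, and the ha-manager calls are echoed rather than executed.

    ```shell
    # Hypothetical sketch: no ALL value exists, so iterate over every HA
    # resource. The 'status' variable stands in for `ha-manager status`
    # output; the commands are echoed, not run.
    status='quorum OK
    master pve1 (active, Tue Jan  1 00:00:00 2019)
    service vm:100 (pve1, started)
    service vm:101 (pve2, started)'
    echo "$status" | awk '$1 == "service" {print $2}' | while read -r sid; do
        echo ha-manager set "$sid" -state stopped
    done
    ```

    Dropping the inner `echo` would issue the real commands on a cluster node.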
  6. how to install ceph influx module?

    thanks for the tip but... Still no dice - back to the original error: MGR_MODULE_DEPENDENCY: Module 'influx' has failed dependency: influxdb python module not found Very odd. At least I have the dashboard up and running. I'd prefer to have it all in Grafana but I'll take what I can get!
  7. how to install ceph influx module?

    Still no dice for me - I've installed PIP & retrieved the influxdb module on all three of my Prox servers. Restarted the managers multiple times. I tried '-force' but it killed all three of my managers and they wouldn't come back up, so I trashed them and created three new ones. I did manage to get the...
  8. how to install ceph influx module?

    Have you found a solution to this? I am interested in pulling my Ceph data into Grafana via InfluxDB so that I can monitor OSD status. I currently have this up and running (minus Ceph) with the Metric Server enabled in Prox. But I have hit a wall with the "ceph mgr module enable influx" route...
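    One way to narrow down the MGR_MODULE_DEPENDENCY error from these threads is to check whether the Python interpreter that ceph-mgr runs under can actually import influxdb. A minimal sketch, assuming python3 is that interpreter:

    ```shell
    # Hypothetical check: the 'influx' mgr module fails its dependency when
    # ceph-mgr's python cannot import influxdb. python3 is assumed to be
    # the interpreter ceph-mgr uses.
    if python3 -c 'import influxdb' 2>/dev/null; then
        echo "influxdb importable"
    else
        echo "influxdb missing for this interpreter"
    fi
    ```

    If the import fails here even though `pip install influxdb` succeeded, pip likely targeted a different interpreter than the one ceph-mgr uses.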
  9. [SOLVED] Import VMWare Centos 7 machine

    Just successfully used this pro-tip 4 years later :D
  10. Import from VMWare to Ceph

    Well, that's certainly an easier method! Thanks for sharing, I'll give that a try.
  11. Import from VMWare to Ceph

    Old post but very relevant information. I'd like to expand on this by adding in an NFS share to the process. I have restored my VMware images backed up to Vembu to an NFS share on the network. Vembu allows you to export RAW, so that saves the step of having to convert - but the above command...
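    For the NFS-to-Ceph step described here, a sketch of what the import might look like: a RAW image restored to an NFS mount is attached to a VM with `qm importdisk`. The VM ID, path, and storage name are placeholders, and the command is echoed rather than executed since there is no PVE host here.

    ```shell
    # Hypothetical sketch: import a RAW image from an NFS share into a
    # Ceph-backed storage. VMID (100), path, and storage name are assumptions.
    vmid=100
    image=/mnt/pve/nfs-backups/vm.raw
    storage=ceph-vm
    echo qm importdisk "$vmid" "$image" "$storage" --format raw
    ```

    Because the source is already RAW, no `qemu-img convert` pass is needed first, which matches the Vembu export workflow described above.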
  12. RSTP Loop with Secondary Bonding

    I currently have a 10GbE RSTP Loop running on my three node cluster, as outlined here https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server. And it is working well. However I want to add a secondary ring to my Ceph Public network. I have 2 spare 1GbE ports available on each node. Is...
  13. VM shutdown on host shutdown.

    Thanks @fabian - that's exactly what I was hoping to find. Much appreciated
  14. VM shutdown on host shutdown.

    Thanks so much for taking the time to do this. I'll definitely give it a try. I spent some time digging around Prox and found a handful of files related to 'pve-manager', but had no luck finding the exact scripting. It's called as a service but it's actually not a daemon, so I was hoping to find the...
  15. VM shutdown on host shutdown.

    @fabian is this still the preferred method to shut down guests without shutting down the host? # service pve-manager stop I am trying to coordinate a safe shutdown of Ceph, but I need it to shut down 1) guests, 2) OSDs and then 3) the host. Unless I am mistaken and there is a simple way to gracefully...
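    The guests-then-OSDs-then-host ordering described above can be sketched as the following sequence. The commands are echoed rather than executed (no cluster here), and the stopall and OSD-target steps are assumptions about a hyper-converged node:

    ```shell
    # Hypothetical shutdown ordering for one hyper-converged node; commands
    # are echoed only, and the node name is an assumption.
    host=pve1
    echo pvesh create /nodes/"$host"/stopall    # 1) shut down guests
    echo systemctl stop ceph-osd.target         # 2) stop this node's OSDs
    echo systemctl poweroff                     # 3) power off the host
    ```

    On a real cluster, step 1 would need the HA caveat mentioned earlier in these threads: HA-managed guests are not covered by stopall.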
  16. Shutdown of the Hyper-Converged Cluster (CEPH)

    Sure! But it might be a little bit of a wait. I'm only playing with a testbench at the moment, and UPS is holding our new servers hostage in customs. I want to make certain everything is running perfectly before I share publicly. But I will be happy to once I get it all working!
  17. Shutdown of the Hyper-Converged Cluster (CEPH)

    I'm currently configuring NUT safe-shutdown scripts on a Prox cluster w/ CEPH & HA. I am assuming this is the recommended procedure for graceful shutdown?
  18. apcupsd graceful guest shutdown on HA/CEPH cluster

    Update. I am no longer using APCUPSD - I'm using NUT instead. I know that Prox is smart enough that an fsd shutdown command will shut down all guests before the hosts. But does this apply to a Prox cluster running CEPH, or does the shutdown script in NUT need to 'noout' and 'norebalance' every...
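    A sketch of the flag-setting the question asks about: before an fsd-driven cluster shutdown, these Ceph flags would stop OSDs being marked out and data rebalancing mid-shutdown. `noout` and `norebalance` are real Ceph OSD flags; the commands are echoed rather than executed, since there is no cluster or UPS here.

    ```shell
    # Hypothetical pre-shutdown step: set cluster flags, then trigger NUT's
    # forced shutdown from the primary. Echoed only.
    for flag in noout norebalance; do
        echo ceph osd set "$flag"
    done
    echo upsmon -c fsd    # NUT forced-shutdown command on the primary
    ```

    On power restore, the same flags would be cleared with `ceph osd unset`.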
  19. apcupsd graceful guest shutdown on HA/CEPH cluster

    Hi all. I am soon deploying a three node cluster with HA guests & CEPH storage. I will be setting up apcupsd to initiate graceful shutdown, but I am unsure of the appropriate shutdown scripting when HA & CEPH are in play. I have a few questions and would greatly appreciate any tips: 1. What...
  20. PVE w/CEPH - TUNED profiles

    Hello. I'm wondering if anyone can share their experience using tuned-adm profiles to improve performance of a PVE/CEPH HA cluster. There is a "network-latency" profile that seems ideal for CEPH. But then there are also the "virtual-guest" and "virtual-host" profiles that RHEL recommends. If...
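    For reference, inspecting and switching profiles is done with `tuned-adm`; a minimal sketch, echoed rather than executed since tuned isn't assumed to be installed here:

    ```shell
    # Hypothetical sketch: inspect and apply a tuned profile on a PVE node.
    echo tuned-adm list                       # show available profiles
    echo tuned-adm profile network-latency    # apply the low-latency profile
    echo tuned-adm active                     # confirm the active profile
    ```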