Search results

  1. Ceph: Balancing disk space unequally!?!?!?!

    Thought it was balancing automatically. This is the output: { "active": true, "last_optimize_duration": "0:00:00.001408", "last_optimize_started": "Mon Nov 27 20:02:25 2023", "mode": "upmap", "no_optimization_needed": true, "optimize_result": "Unable to find further...
  2. High iodelay on one of three identical nodes

    True. I guess there could have been a VM that uses a local drive - but there wasn't. Yes, two separate rules for each pool. It's a Samsung PM863a with 3.84TB capacity.
  3. Ceph: Balancing disk space unequally!?!?!?!

    Yes, the "r" was from the "replicated_rule" whereas the "c" was from "ceph_hdd" and "ceph_sdd" - my own replicated rules for two pools. I did change the default replicated rule for .mgr to "ceph_ssd" as suggested. There was a very brief spike of activity in Ceph but overall nothing has changed...
  4. Mailcow + PMG make sense?

    Hi, I am running a Mailcow instance in a VM on my cluster and it is going great. But, me being me and my cluster being a home lab and no commercial operation, I am always trying out new things, playing around. Now I am wondering whether there would be any benefit in setting up a Proxmox Mail...
  5. High iodelay on one of three identical nodes

    Hi, I have a three node PVE cluster with identical nodes. Each node has an SSD that is part of a Ceph pool (I know, I should have more SSDs in the pool). And each node also has two HDDs that are part of another Ceph pool. I replaced the three enterprise grade SSDs with three other, larger...
  6. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Hmm. So I just migrated / shut down the VMs on the node with high iodelay. Turns out, even with zero VMs running, the iodelay remains (almost) the same. Probably has nothing to do with the VMs then. I'll open a new thread for this topic. But I'd still be interested how to interpret the output...
  7. Cluster getting really ssssllllloooooowwwwww :-(((((((((((((((((((((((((

    Having a strange situation again: This time, I replaced the three enterprise grade SSDs with three other, larger enterprise grade SSDs. Two nodes show low iodelays (2%) while one node shows very high iodelays (25%). The three nodes are basically identical (make, model, cpu, memory) and also the...
  8. [SOLVED] PBS Metrics?

    Hi, I just installed a Grafana Influx stack and enabled PVE metrics logging to it. Now I'm trying to find similar functionality in PBS -- is there? Thanks!
  9. Proxmox how to encrypt VM backup

    Thanks. Will try the safe way... Edit: Typo
  10. Proxmox how to encrypt VM backup

    Thanks, but I'm afraid my question was not phrased clearly enough: Oguz had said that the checkbox (to activate encryption) could be checked anytime. When I asked how to check that, I did not mean how to check whether encryption was active or not, but rather how to check that checkbox (in order...
  11. Proxmox how to encrypt VM backup

    How do I check this for existing storage? There is no "Encryption" tab... Thanks!
  12. Why are my synced encrypted backups not encrypted???????????

    Yes, exactly. Did I miss a setting somewhere to keep the backups encrypted when synced?
  13. Why are my synced encrypted backups not encrypted???????????

    So I have a local PBS running, backing up my PVE cluster. Backups are encrypted. And I have another PBS running offsite, syncing the backups from my local PBS for safekeeping. The same backups are not encrypted at the other PBS. Why? What do I need to do to keep my encrypted backups the way I...
  14. Tape clean: TASK ERROR: unload drive failed - Not Ready, Additional sense: Cleaning cartridge installed

    Correct, I did not see any confirmation of successful cleaning. Fujitsu Eternus LT S2 and IBM ULT3580-HH7.
  15. Tape clean: TASK ERROR: unload drive failed - Not Ready, Additional sense: Cleaning cartridge installed

    Hi, I was able to purchase a small tape library for my home lab to use with my PBS. So far it has been working flawlessly. But today, I thought it a good idea to clean the drive. So I imported a cleaning cartridge, unloaded the current tape from the drive and clicked on Clean Drive in the PBS...
  16. Ceph: Balancing disk space unequally!?!?!?!

    Done. Confirmed it's on. The "Optimal PG Num" column remains empty. The 12TB OSDs are all the exact same make and model, and the 4TB OSDs are too. The allocation remains unchanged (i.e. uneven on two of the three nodes). What else could I try? Thanks!
  17. Ceph: Balancing disk space unequally!?!?!?!

    Name │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ C...
  18. Ceph: Balancing disk space unequally!?!?!?!

    On the (more or less) balanced node there are 226 and 63 PGs on the OSDs, while on the unbalanced nodes there are 218 vs. 71 and 225 vs. 64, respectively. There doesn't seem to be any rhyme or reason behind it.
  19. Ceph: Balancing disk space unequally!?!?!?!

    Unfortunately, no. That's already after rebalancing...
  20. Ceph: Balancing disk space unequally!?!?!?!

    Not sure - I have what comes as standard in PVE. If you are referring to a separate piece of software, then I don't have that installed. In any case, I can see the Crush Map in the PVE GUI. It shows the same weights I reported above.
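The Ceph balancing threads above revolve around per-OSD PG counts and the upmap balancer reporting "no_optimization_needed". As a rough sketch (assuming a recent Ceph release, and that the balancer's `upmap_max_deviation` setting is still at its default of 5), these are the commands typically used to investigate such an imbalance:

```shell
# Show per-OSD utilization and PG counts (the PGS column);
# uneven PGS values across same-sized OSDs explain uneven %USE.
ceph osd df tree

# Balancer state; in upmap mode, "no_optimization_needed": true
# means it considers the current PG distribution good enough.
ceph balancer status

# The upmap balancer tolerates a per-OSD deviation of 5 PGs by
# default; lowering it makes balancing stricter. (Assumption:
# this default has not already been changed on the cluster.)
ceph config set mgr mgr/balancer/upmap_max_deviation 1
```

Note that the upmap balancer equalizes PG counts per OSD, not bytes used, so some residual unevenness in disk usage is expected when PGs differ in size.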