Search results

  1. Tape clean: TASK ERROR: unload drive failed - Not Ready, Additional sense: Cleaning cartridge installed

    Correct, I did not see any confirmation of successful cleaning. Fujitsu Eternus LT S2 and IBM ULT3580-HH7.
  2. Tape clean: TASK ERROR: unload drive failed - Not Ready, Additional sense: Cleaning cartridge installed

    Hi, I was able to purchase a small tape library for my home lab to use with my PBS. So far it has been working flawlessly. But today, I thought it a good idea to clean the drive. So I imported a cleaning cartridge, unloaded the current tape from the drive and clicked on Clean Drive in the PBS...
  3. Ceph: Balancing disk space unequally!?!?!?!

    Done. Confirmed it's on. The "Optimal PG Num" column remains empty. The 12TB OSDs are all the exact same make and model, and the 4TB OSDs are too. The allocation remains unchanged (i.e. uneven on two of the three nodes). What else could I try? Thanks!
  4. Ceph: Balancing disk space unequally!?!?!?!

    Name │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ C...
  5. Ceph: Balancing disk space unequally!?!?!?!

    On the (more or less) balanced node there are 226 and 63 PGs on the OSDs, while on the unbalanced nodes there are 218 vs 71 and 225 vs 64, respectively. There doesn't seem to be any rhyme or reason behind it.
  6. Ceph: Balancing disk space unequally!?!?!?!

    Unfortunately, no. That's already after rebalancing...
  7. Ceph: Balancing disk space unequally!?!?!?!

    Not sure - I have what comes as standard in PVE. If you are referring to a separate piece of software, then I don't have that installed. In any case, I can see the Crush Map in the PVE GUI. It shows the same weights I reported above.
  8. Ceph: Balancing disk space unequally!?!?!?!

    Hi, I have a three node PVE cluster in which each node is also a Ceph node. Each Ceph node used to have one identical HDD and the pool was getting full. Therefore, and because one is supposed to have more OSDs anyway, I added one additional identical HDD OSD per node. Ceph rebalanced between...
  9. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    Hi Jeremy, Sorry, it would have made sense to report this to the TKL forums as well after resolving it. I shall try and keep that in mind, should I ever have another TKL related issue. Great to hear that dbus will be included in v18. Cheers
  10. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    Happy to confirm that installing dbus on two of my TKL appliances did the trick (I did not also install systemd; that seems to be installed already). The only other comment I have is that it did not work right after the installation of dbus; I needed to reboot the VM once. Thanks for your...
  11. Even no of PVE nodes?

    Yeah, I think you are right. But then it doesn't work the way it needs to. I want one of the three 24/7 nodes to be allowed to fail (or go down for maintenance) and the cluster to continue to work. Maybe UdoB's idea could be the solution. I have to think that through.
  12. Even no of PVE nodes?

    Good point. So my plan was to have the fourth node join Ceph but not host any OSDs. What if I also don't set it up as a Monitor or Manager? Would that keep it neutral in the quorum count?
  13. Even no of PVE nodes?

    One more try: Now, I have three nodes with three votes and need two for quorum. If I add a fourth server, I also add the quorum device. This would give me five "nodes" (the quorum device counts) with five votes out of which I need three for quorum. So when node no. 4 is offline (which is most...
  14. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    pveversion -v
    proxmox-ve: 7.4-1 (running kernel: 5.15.108-1-pve)
    pve-manager: 7.4-16 (running version: 7.4-16/0f39f621)
    pve-kernel-5.15: 7.4-4
    pve-kernel-5.15.108-1-pve: 5.15.108-1
    pve-kernel-5.15.107-2-pve: 5.15.107-2
    pve-kernel-5.15.107-1-pve: 5.15.107-1
    pve-kernel-5.15.102-1-pve: 5.15.102-1...
  15. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    No, nothing further. When I ping the guest agent, I don't get any response (no time out or anything). (I can ping the VM with a normal ping, though).
  16. [SOLVED] Anyone else having issues shutting down Turnkey Linux appliances from PVE host?

    Qemu-guest-agent is up and running - I can see the VM's IP address in the GUI. Inside the VM, journalctl shows me: qemu-ga[xxx]: info: guest-shutdown called, mode: (null)
  17. Even no of PVE nodes?

    Okay, new try: Now, I have three nodes with three votes and need two for quorum. If I add a fourth server, I also add the quorum device. This would give me four nodes (the quorum device does not count here) with four votes out of which I need three for quorum. So when node no. 4 is offline...
  18. Ceph node without OSDs?

    Thank you, noted - we are discussing this in a parallel thread at this very time (I just wanted to have two threads for two distinct questions)...
  19. Even no of PVE nodes?

    And how would I implement that? Now, I have three nodes and two form a quorum. When I add the fourth server, I would also add the quorum device, right? Then this would give me five "nodes" out of which I need four for quorum. But node no. 4 will be offline most of the time. If one of the...
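The vote counting debated in the "Even no of PVE nodes?" results above (items 11-13, 17, 19) follows the standard majority rule: with V expected votes, the cluster stays quorate only while floor(V/2) + 1 votes are present. Below is a minimal sketch of that arithmetic for the scenarios mentioned in the snippets. The helper name quorum_needed is hypothetical, and the QDevice is assumed to contribute a single vote (its actual contribution depends on how it is configured), so treat this as an illustration of the math rather than a description of any specific corosync setup.

    # Sketch of the majority-based quorum arithmetic discussed above.
    # Assumption: quorum = floor(expected_votes / 2) + 1, and the
    # QDevice adds exactly one vote (configuration-dependent in practice).

    def quorum_needed(expected_votes: int) -> int:
        """Votes required to hold a majority of expected_votes."""
        return expected_votes // 2 + 1

    scenarios = {
        "3 nodes, no QDevice": 3,         # quorum at 2 -> one node may fail
        "4 nodes, no QDevice": 4,         # quorum at 3 -> still only one node may fail
        "4 nodes + QDevice (1 vote)": 5,  # quorum at 3 -> two votes may be missing
    }

    for name, votes in scenarios.items():
        need = quorum_needed(votes)
        print(f"{name}: {votes} votes, quorum at {need}, "
              f"tolerates {votes - need} missing vote(s)")

Under these assumptions, the even four-node layout gains nothing over three nodes (both tolerate a single missing vote), while adding the extra vote brings the total to five and lets the mostly-offline fourth node plus one more node be absent at the same time, which matches the conclusion reached in items 13 and 17.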