Recent content by Ashley

  1.

    [SOLVED] CEPH Recovery Stopped

    Is your issue fully resolved now?
  2.

    Prox' LXC exposes info like load from host in top/htop

    This is not a bug or an issue with Proxmox. LXC does not currently support an individual load value per container. The load averages shown inside the container are node-wide; this has been discussed upstream in LXC multiple times, but a simple and accurate way to report a container-only load has not yet been found.
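
    A quick way to confirm this behaviour (a sketch, assuming a container with ID 101 on the same node; the ID is only an example) is to compare the load figures inside the container with those on the host:

      # on the Proxmox node
      cat /proc/loadavg
      # inside the container -- the values match the node, not the container
      pct exec 101 -- cat /proc/loadavg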
  3.

    Windows 10 vm slow performance

    Is the VM itself sluggish, or just the SPICE console? Have you tried RDP directly to the Windows VM? Is the performance the same then, or does the slowness go away?
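
    If only the console is slow, it can also help to check which display and disk bus the VM uses (a sketch, assuming VMID 100; the ID is only an example):

      # show the relevant parts of the VM configuration on the node
      qm config 100 | grep -Ei 'vga|scsi|virtio|ide|net'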
  4.

    [SOLVED] CEPH Recovery Stopped

    I would wait for all your data to move around and the repair to fully complete, then let a full set of deep scrubs rotate through. While an OSD is in a repair state it won't be deep scrubbed, so this could just be a false positive.
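
    To keep an eye on this (a sketch; osd.3 and the PG ID are placeholders), watch the recovery finish and then kick off deep scrubs manually:

      # overall cluster and recovery status
      ceph -s
      # ask one OSD to deep-scrub all of its placement groups
      ceph osd deep-scrub osd.3
      # or deep-scrub a single placement group
      ceph pg deep-scrub 2.1f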
  5.

    [SOLVED] CEPH Recovery Stopped

    I would suggest marking the fullest SSD OSD as out. This still allows read I/O to hit the SSD, but Ceph will start to move the data from this SSD to the other OSDs; just stopping the OSDs would make any data on those SSDs unavailable. Once the first SSD is completed you can continue...
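
    A minimal sketch of that, assuming the fullest SSD is osd.5 (the ID is only an example):

      # mark the OSD out so Ceph drains its data while it keeps serving reads
      ceph osd out osd.5
      # watch the data move and the fill levels drop
      ceph osd df
      ceph -w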
  6.

    [SOLVED] CEPH Recovery Stopped

    From your ceph osd df output you have a couple of OSDs over the full limit; this will stop any further rebalancing, as otherwise they could hit the 100% mark and block I/O. Do you have a cache layer in front of the HDD pool using the 4 SSDs?
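
    If you need a short-term escape hatch while data is moved off, the ratios can be inspected and, carefully, raised a little (a sketch, assuming Ceph Luminous; 0.92 is only an example and should stay below the full ratio):

      # show the current nearfull/backfillfull/full ratios
      ceph osd dump | grep ratio
      # temporarily allow backfill onto fuller OSDs
      ceph osd set-backfillfull-ratio 0.92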
  7.

    Ceph: Erasure coded pools planned?

    Any update on this? I am struggling to find anywhere in the GUI to set this. I have tried manually creating an RBD with the data pool set and referencing it in a KVM config, but it won't boot.
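
    The manual route would look roughly like this (a sketch, assuming Luminous; the pool and image names are only examples):

      # allow overwrites on the erasure coded pool, required for RBD
      ceph osd pool set ecpool allow_ec_overwrites true
      # metadata in the replicated pool, data in the EC pool
      rbd create rbd/vm-100-disk-1 --size 32G --data-pool ecpool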
  8.

    Proxmox + RAW EC RBD

    Yeah, I did; however, unless I missed something, they were talking about the general setup of it rather than someone actually attempting it and sharing their findings.
  9.

    Proxmox + RAW EC RBD

    Hello, I have been testing out raw EC (no change) with CephFS and RBD on the latest Ceph 12.x release. It works fine outside of Proxmox; however, trying to use an RBD created on an EC pool within Proxmox fails. 1/ The RBD image does not show in the Proxmox storage content view, however "rbd ls -p...
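
    A quick way to show the mismatch (a sketch; the pool and storage names are placeholders) is to compare what Ceph reports with what Proxmox lists:

      # images Ceph itself can see in the replicated metadata pool
      rbd ls -p rbd
      # what Proxmox lists on the matching storage
      pvesm list ceph-rbd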
  10.

    4th Node Nightly Rebooting

    No HA resources, just a 4-node cluster for management. However, it has been fine the past 2 nights; will continue to monitor. "journalctl -u pve-ha-lrm" shows nothing since the last reboot: Sep 01 23:47:04 cn04 systemd[1]: Starting PVE Local HA Ressource Manager Daemon... Sep 01 23:47:04 cn04...
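
    To narrow down what triggers the reboot (a sketch of generic checks, not taken from the thread), it can help to list recent boots and read the end of the previous boot's journal:

      # reboot and shutdown records
      last -x | head
      # journals kept per boot
      journalctl --list-boots
      # last messages before the unexpected reboot
      journalctl -b -1 -e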
  11.

    4th Node Nightly Rebooting

    Hello, I have had a 3-node cluster running perfectly fine for months and recently added a 4th node to this cluster (same hardware, DL160 G9, and same configuration). For the last few nights, every night around 11:40-50, the server reboots itself; looking through the logs I am struggling to see...
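
    One way to dig further (a sketch; the times are only examples matching the reported window and refer to the current day) is to pull the journal for the minutes before the reboot and check the hardware event log:

      # messages leading up to the reboot window
      journalctl --since "23:30" --until "23:55"
      # iLO/BMC hardware events, if ipmitool is installed
      ipmitool sel list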
  12.

    after upgrade to PVE 5.0: unable to add osd

    What does the OSD log show in /var/log/ceph?
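
    For example (the OSD ID is a placeholder), the tail of the failing OSD's log usually shows why it did not come up:

      tail -n 50 /var/log/ceph/ceph-osd.2.log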
  13.

    Ceph Thin Provisioning

    You need to make sure discard is enabled within Proxmox on the disk in question. Then, within the VM, run fstrim to release the unused space; this will take a while for 1TB to clear.
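
    A minimal sketch, assuming VMID 100 with its disk on scsi0 backed by a storage called ceph-rbd (all of these names are examples):

      # enable discard on the existing disk definition
      qm set 100 --scsi0 ceph-rbd:vm-100-disk-1,discard=on
      # then, inside the VM, trim all mounted filesystems
      fstrim -av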
  14.

    help full OSD need help

    Correct, and how did you remove pm05?
  15.

    help full OSD need help

    Try unsetting noout. If that doesn't help, what is the full output of ceph health detail?
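
    For reference, a sketch of the corresponding commands:

      # clear the noout flag so Ceph can rebalance again
      ceph osd unset noout
      # then check the detailed health output
      ceph health detail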
