Search results

  1. CEPH REEF 16 pgs not deep-scrubbed in time and pgs not scrubbed in time

    Not a huge Ceph expert, just a long-time user in my home lab cluster; you just pick up various small things as you go... I set up my cluster when there was no autoscale functionality, and there used to be different calculators; that's where I learned that it is supposed to be a few dozen PGs per OSD...
  2. CEPH REEF 16 pgs not deep-scrubbed in time and pgs not scrubbed in time

    Maybe it's just not able to finish all the scrubbing operations for all your PGs in two weeks? I had this when scrubbing stopped due to some OSD issues, and it took maybe a week or more after the issue was fixed for the PGs to all get scrubbed... Also, to me 97 PGs for 64 OSDs seems too low... They say...
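
    For reference, a rough sketch of the rule of thumb those old calculators used, targeting on the order of 100 PGs per OSD (the pool name 'mypool' and the numbers are placeholders, not taken from the thread):

        # Target total PGs ~= (OSDs x 100) / replica count, rounded to a power of 2.
        # For 64 OSDs with 3 replicas: 64 * 100 / 3 ~= 2133, so 2048.
        ceph osd pool set mypool pg_num 2048

        # Or let Ceph manage it with the autoscaler:
        ceph osd pool set mypool pg_autoscale_mode on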
  3. Ceph and TBW with consumer grade SSD

    Yep. I have run my cluster in this configuration for more than 7 years and am still waiting for something to happen... It's a lab anyway; I actually wanted to see what could go wrong as a learning exercise, but it just works...
  4. Ceph and TBW with consumer grade SSD

    I have a 3-node proxmox/ceph cluster with consumer-grade NVMe SSDs and it works fine, and I use a dozen or so different VMs. I just checked the 1TB Intel 660p and Crucial P1 that I started using in 2019: one of them has 108 TB written, the other 126 TB. Basically that is less than 2/3 of their...
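
    If you want to check lifetime writes on your own NVMe drives, a minimal sketch using smartmontools (the device path is an example):

        # "Data Units Written" is counted in 512,000-byte units per the NVMe spec:
        #   TB written = data_units * 512000 / 10^12
        smartctl -a /dev/nvme0 | grep 'Data Units Written'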
  5. RE-IP Proxmox VE and Ceph

    @complexplaster27 For each OSD you configure the address to use, something like below (I use different cluster and public subnets, but you can just use the same address for both). Then you restart that OSD (osd.8) so it starts using the new IP address. Check that everything works and the cluster still...
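
    The search snippet cuts off before the example; a per-OSD address entry might look roughly like this (the subnets here are made up, not the poster's actual config):

        # /etc/ceph/ceph.conf fragment:
        [osd.8]
                public addr  = 192.168.10.8
                cluster addr = 192.168.20.8

        # Restart the OSD so it binds to the new addresses:
        systemctl restart ceph-osd@8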
  6. Ceph Health Error & pgs inconsistent

    I think you should give it some time for the deep scrubbing to finish and fix the inconsistencies.
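
    For reference, a minimal sketch of the commands involved if you would rather trigger the scrub yourself than wait (the PG id 2.1f is a placeholder; use the ids that 'ceph health detail' reports):

        # List the inconsistent PGs:
        ceph health detail

        # Kick off a deep scrub, and a repair if the inconsistency persists:
        ceph pg deep-scrub 2.1f
        ceph pg repair 2.1f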
  7. RE-IP Proxmox VE and Ceph

    @complexplaster27 No, it's not related to the Proxmox network. When you restart an OSD configured to use the new subnet, it will need to be able to communicate with the OSDs that are still on the old subnet. Same with the MONs: when you re-create one, the new MON still needs to communicate with the old ones...
  8. RE-IP Proxmox VE and Ceph

    Hi @complexplaster27 I have responded to similar questions on the forum. I believe you will have to ensure there is routing between your old and new subnets for the duration of the transition. You are correct: you just modify those parameters appropriately as you go (I believe there is also...
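
    The snippet does not show which parameters are meant; presumably the Ceph network settings, which on Proxmox live in /etc/pve/ceph.conf and look roughly like this (example subnets):

        [global]
                public_network  = 192.168.10.0/24
                cluster_network = 192.168.20.0/24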
  9. Guidance: Installing second network and re-configuring Ceph

    I would say the very first step would be to configure your new network and ensure the hosts can talk to each other on it. I believe that can all be done from the GUI and is not related to Ceph at all. You will also need to enable routing between the old subnet and the new subnet. During the...
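
    A minimal sketch of the connectivity checks this implies, run from each node (the address is an example):

        # Confirm the hosts can reach each other on the new subnet:
        ping -c 3 192.168.20.12

        # Confirm a route exists from the old subnet to the new one:
        ip route get 192.168.20.12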
  10. Remove and re-add cepf OSD

    OK, the GUI command should do the zap. And if you were able to re-create the OSD from the GUI, that means the disks were zapped (first sectors zeroed). The command is 'ceph-volume lvm zap ...' I can see that your OSDs are assigned some class 'os'. Not sure where it came from; maybe you were playing...
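
    A sketch of the commands being discussed (destructive; the device path and OSD id are placeholders):

        # Zap a disk that used to back an OSD, removing its LVM metadata:
        ceph-volume lvm zap /dev/sdX --destroy

        # Fix a wrong CRUSH device class on an OSD:
        ceph osd crush rm-device-class osd.3
        ceph osd crush set-device-class ssd osd.3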
  11. Remove and re-add cepf OSD

    Hi @vaschthestampede Did you zap the destroyed OSD? I believe the 'pveceph osd destroy' command does this, but if you use the regular ceph commands it needs to be done manually...
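
    A sketch of the two variants, assuming OSD id 3 (the id and device path are placeholders):

        # Proxmox wrapper, which also cleans the disk:
        pveceph osd destroy 3 --cleanup

        # Plain Ceph tooling, where the zap is a separate manual step:
        ceph osd purge 3 --yes-i-really-mean-it
        ceph-volume lvm zap /dev/sdX --destroy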
  12. 4 node cluster - very slow network/ceph speeds

    I know that in certain MikroTik misconfigurations the traffic has to be processed by the CPU, resulting in very poor performance... You can check whether this is the case by running 'system resource monitor' on the MikroTik, then running your iperf3 test and watching the CPU. On a properly configured MikroTik there should be no...
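
    A minimal sketch of that test (the address is an example): run an iperf3 server on one node, point a client at it from another, and watch the switch CPU in 'system resource monitor' while it runs.

        # On one node:
        iperf3 -s

        # On another node:
        iperf3 -c 192.168.10.11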
  13. Ceph: Import OSD to a new node

    Here are my notes from moving an OSD between hosts, which I did a couple of years ago; you might try the lvm activate command and see if that helps... On the old host: stop the OSD, mark it OUT, 'lvchange -a n $VG/$LV', 'vgexport $VG'. Move the disk. Then: 'lsblk', 'vgscan', 'vgimport $VG', 'vgchange -a y $VG'...
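
    Expanded into a runnable sketch, with a placeholder OSD id and placeholder VG/LV names (look yours up with 'lvs'); the final activate step is based on the post's suggestion to try it, not on the truncated notes:

        # Placeholders; substitute the names 'lvs' shows for your OSD:
        VG=ceph-0fab1234 LV=osd-block-5678

        # On the old host:
        systemctl stop ceph-osd@5        # stop the OSD
        ceph osd out 5                   # mark it OUT
        lvchange -a n $VG/$LV            # deactivate the logical volume
        vgexport $VG                     # detach the volume group

        # Physically move the disk, then on the new host:
        lsblk                            # confirm the disk shows up
        vgscan
        vgimport $VG
        vgchange -a y $VG                # reactivate the volume group

        # Let ceph-volume discover and start the OSD from its LVM tags:
        ceph-volume lvm activate --all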
  14. Cheap Homelab NUC Cluster with Ceph.

    I have been running a 3-node proxmox/ceph cluster for several years now. I use cheap refurbished Dell desktops with cheap consumer-grade NVMe drives and Solarflare 10Gb adapters (though now my preference is the Intel X520). It works fine as far as the cluster itself is concerned: live-migrating the VMs and doing host maintenance without...
  15. Adding node to cluster of 3 broke the cluster

    I had a bad experience when I tried the HA configuration, so I stopped using it. If I am not mistaken, if a node goes out of quorum even for a short while it will fence itself off, which means it has to shut down or something like that (and of course that brings down all its VMs). I think that...
  16. nas share and wakeonlan

    Nope, I just remember that I used that thing a few years ago for a similar purpose. I believe I found some examples of the scripts in the Proxmox documentation and then adapted those... I no longer use that, so I don't know the current status...
  17. Proxmox host regularly loses network connection, needs full reboot to restore

    Can you try using your 2.5Gb NIC just to rule out 10Gb NIC/switch/driver issues? Also, if you have two nodes in the cluster, I recommend assigning one of the nodes (a 'primary' one) two votes, so that the cluster stays in quorum when the other node is down (it's better than having...
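
    A sketch of what the two-vote setup looks like in /etc/pve/corosync.conf (node names and addresses are made up; remember to bump config_version when editing the file):

        nodelist {
          node {
            name: pve1
            nodeid: 1
            quorum_votes: 2
            ring0_addr: 192.168.1.11
          }
          node {
            name: pve2
            nodeid: 2
            quorum_votes: 1
            ring0_addr: 192.168.1.12
          }
        }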
  18. nas share and wakeonlan

    You can search the forum and the docs for the backup hook scripts, and do wake-on-LAN at the start of the backup job hook and unmount at the end... But backups to a NAS are going to fill the space very quickly because there is no deduplication. You'd be better off setting up a Proxmox Backup...
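
    A minimal sketch of such a hook script, wired up via the 'script:' option in /etc/vzdump.conf (the MAC address and mount point are placeholders; 'wakeonlan' is one common WOL utility):

        #!/bin/bash
        # vzdump calls the hook with the phase as the first argument.
        case "$1" in
          job-start)
            wakeonlan AA:BB:CC:DD:EE:FF   # wake the NAS
            sleep 60                      # give it time to boot
            mount /mnt/nas-backup         # assumes an /etc/fstab entry
            ;;
          job-end|job-abort)
            umount /mnt/nas-backup
            ;;
        esac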
  19. Proxmox Cluster Setup - What is the "best"?

    I believe min_size 2 applies to writes, so in a 3-node cluster with 1 OSD each and a 3/2 replication factor, if 2 OSDs are down you should still be able to read everything but not write... So not complete downtime, depending on the use case. But that only applies to the disks themselves going down. You...
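
    For reference, the knobs in question, with a placeholder pool name:

        # Inspect the replication factor and the write floor:
        ceph osd pool get mypool size
        ceph osd pool get mypool min_size

        # The usual 3/2 setting:
        ceph osd pool set mypool size 3
        ceph osd pool set mypool min_size 2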
  20. Ceph SSD recommendations

    Hi @troycarpenter Would you mind sharing what kind of SSD cache configuration you are using?
