Search results

  1. Remove and re-add ceph OSD

    OK, the GUI command should do the zap. And since you were able to re-create the OSD from the GUI, that means the disks were zapped (first sectors zeroed). The command is 'ceph-volume lvm zap ...' (sketched after this results list). I can see that your OSDs are assigned some class 'os'. Not sure where it came from, maybe you were playing...
  2. Remove and re-add ceph OSD

    Hi @vaschthestampede Did you zap the destroyed OSD? I believe the 'pveceph osd destroy' command does this, but if you use the regular ceph commands it needs to be done manually...
  3. 4 node cluster - very slow network/ceph speeds

    I know that in certain MikroTik misconfigurations the traffic has to be processed by the CPU, resulting in very poor performance... You can check if this is the case by running 'system resource monitor' on the MikroTik, then run your iperf3 test and watch the CPU (see the iperf3 sketch after this list). In a properly configured MikroTik there should be no...
  4. Ceph: Import OSD to a new node

    Here are my notes from moving an OSD between hosts that I did a couple of years ago; you might try the lvm activate command and see if that helps (expanded after this list)...
    On the old host:
    - Stop the OSD
    - Mark it OUT
    - lvchange -a n $VG/$LV
    - vgexport $VG
    Move the disk, then on the new host:
    - lsblk
    - vgscan
    - vgimport $VG
    - vgchange -a y $VG...
  5. Cheap Homelab NUC Cluster with Ceph.

    I have been running a 3-node Proxmox/Ceph cluster for several years now. I use cheap refurbished Dell desktops with cheap consumer-grade NVMe and Solarflare 10Gb adapters (though now my preference is Intel X520). It works fine regarding the cluster itself, live-migrating the VMs and doing host maintenance without...
  6. Adding node to cluster of 3 broke the cluster

    I had a bad experience when I tried the HA configuration, so I stopped using it. If I am not mistaken, when a node goes out of quorum even for a short while, it fences itself off, which means it needs to shut down or something like that (and of course that brings down all the VMs). I think that...
  7. nas share and wakeonlan

    Nope, I just remember that I used that thing a few years ago for a similar purpose. I believe I found some example scripts in the Proxmox documentation and adapted those... I no longer use it, so I don't know the current status...
  8. Proxmox host regularly loses network connection, needs full reboot to restore

    Can you try using your 2.5Gb NIC, just to rule out 10Gb NIC/switch/driver issues? Also, if you have two nodes in the cluster, I recommend assigning one of the nodes (a 'primary' one) two votes, as sketched after this list, so that the cluster stays in quorum when the other node is down (it's better than having...
  9. nas share and wakeonlan

    You can search the forum and the docs for the backup hook scripts: do the wake-on-LAN at the start of the backup job hook and unmount at the end (a hook-script sketch follows this list)... But backups to a NAS are going to fill the space very quickly because there is no deduplication. You'd be better off setting up a Proxmox Backup...
  10. Proxmox Cluster Setup - What is the "best"?

    I believe min_size 2 applies to writes, so in a 3-node cluster with 1 OSD each and a 3/2 replication factor, with 2 OSDs down you should still be able to read everything but not write (see the pool-settings sketch after this list)... So not complete downtime, depending on the use case. But that only applies to the disks themselves going down. You...
  11. Ceph SSD recommendations

    Hi @troycarpenter Would you mind sharing what kind of SSD cache configuration you are using?
  12. Understanding ceph performance and scaling

    There definitely should have been at least some activity during the faults (or host reboots). At the very least, my test Splunk server should receive a permanent inflow of data. It does look like ceph has a way to know that the data on the OSDs that stayed up and were written to needs to be sent...
  13. Understanding ceph performance and scaling

    I have been using a pool with 2/1 replication for my lab VMs for the last 5 years, and the cluster survived a couple of SSD losses and a lot of node reboots. Maybe I am just lucky, but I grew to trust that ceph does OK even with the non-recommended configuration. If you know what you are doing and are OK...
  14. What is the best way to mount a CephFS inside LXC

    I believe it's a host mount, and the mp parameter specifies where it should be mounted within the container, so it should be something like (see also the pct sketch after this list):
    mp0: /mnt/pve/cephfs/export1,mp=/mnt/export1,shared=1
    mp1: /mnt/pve/cephfs/export2,mp=/mnt/export2,shared=1
  15. Proxmox in ESXi. VMs have no access

    You would be better off using a MAC learning switch on VMware rather than promiscuous mode. You can check the following link for your options: https://williamlam.com/2023/05/refresher-on-nested-esxi-networking-requirements.html
  16. Network optimization for ceph.

    Please don't forget that the 250MB/s throughput on HDDs is only for large sequential operations. You can expect it to drop to 2.5MB/s or lower for random IO... Actually, with truly random IO you should get only around 320KB/s throughput per disk (SATA drives doing 80 IOPS times 4K), as the fio sketch after this list can show. I...
  17. What is the best way to mount a CephFS inside LXC

    So, you mounted both cephfs filesystems on each of your proxmox nodes? After that you just modify the container config and add the mp0 and mp1 lines, each with the 'shared=1' option.
  18. Suggestions/Feedback on Proxmox Cluster with Ceph

    I built my cluster 6 years ago, so I don't remember which resources I used at the time... I have built a cluster in a nested lab several times, and it was very straightforward. If you have your single-node environment, I encourage you to test in a virtual lab first; you just need three VMs with 3GB...
  19. Suggestions/Feedback on Proxmox Cluster with Ceph

    Rest assured that it is entirely possible to run a Proxmox cluster on 3 nodes with a single NVMe drive each. I have actually run a similar configuration for several years. Note that you would basically end up with ceph storage equivalent to the size of a single NVMe drive, as the best practice is...
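
Command sketches referenced above

For results 1-2, a minimal outline of the destroy-and-zap sequence, assuming the OSD id is 12 and the disk is /dev/sdX (both placeholders); check the commands against your Ceph/Proxmox release before running them.

    # On the node that hosts the failed OSD (id 12 and /dev/sdX are placeholders)
    ceph osd out 12                      # stop placing data on the OSD
    systemctl stop ceph-osd@12           # stop the OSD daemon
    pveceph osd destroy 12 --cleanup     # Proxmox wrapper; --cleanup also wipes the disk
    # With plain ceph tooling, zap the disk manually instead:
    ceph-volume lvm zap /dev/sdX --destroy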
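
For result 3, one way to reproduce the check: run an iperf3 server on one node, point a client at it from another, and watch the MikroTik CPU at the same time (the 10.0.0.1 address is a placeholder).

    # node A
    iperf3 -s
    # node B
    iperf3 -c 10.0.0.1
    # meanwhile, in the MikroTik terminal, watch cpu-used while the test runs:
    /system resource monitor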
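
Result 4 expanded into a fuller sketch, assuming the OSD is id 12 and lives in volume group $VG (placeholders), and that 'ceph-volume lvm activate' behaves the same on your release.

    # Old host: take the OSD down and export its volume group
    systemctl stop ceph-osd@12
    ceph osd out 12
    lvchange -a n $VG/$LV            # deactivate the logical volume
    vgexport $VG
    # Physically move the disk, then on the new host:
    lsblk                            # confirm the disk is visible
    vgscan
    vgimport $VG
    vgchange -a y $VG
    ceph-volume lvm activate --all   # scans the LVs and starts the OSD service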
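
For the two-vote suggestion in result 8, a corosync.conf sketch; node names and addresses are placeholders. Edit it via /etc/pve/corosync.conf so the change replicates, and remember to bump config_version in the totem section.

    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 2      # the 'primary' node carries two votes
        ring0_addr: 192.168.1.11
      }
      node {
        name: pve2
        nodeid: 2
        quorum_votes: 1
        ring0_addr: 192.168.1.12
      }
    }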
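
A sketch of the hook-script idea from result 9, assuming the 'wakeonlan' tool is installed and the NAS exports an NFS share; the MAC address, share path, and mountpoint are made up. Point vzdump at it with 'script: /usr/local/bin/nas-hook.sh' in /etc/vzdump.conf or in the backup job.

    #!/bin/bash
    # /usr/local/bin/nas-hook.sh -- vzdump invokes this with the phase as $1
    case "$1" in
      job-start)
        wakeonlan aa:bb:cc:dd:ee:ff          # wake the NAS (placeholder MAC)
        sleep 60                             # give it time to boot
        mount -t nfs nas:/backup /mnt/nas-backup
        ;;
      job-end|job-abort)
        umount /mnt/nas-backup
        ;;
    esac
    exit 0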
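
To inspect or change the replication factors discussed in result 10 (the pool name 'rbd' is a placeholder):

    ceph osd pool get rbd size        # replica count, e.g. 3
    ceph osd pool get rbd min_size    # replicas required before I/O stops, e.g. 2
    ceph osd pool set rbd min_size 2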
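
The container config from results 14 and 17 can also be applied with pct instead of hand-editing the file (container id 101 is a placeholder; the cephfs storage must already be mounted on every node):

    pct set 101 -mp0 /mnt/pve/cephfs/export1,mp=/mnt/export1,shared=1
    pct set 101 -mp1 /mnt/pve/cephfs/export2,mp=/mnt/export2,shared=1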
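
To see the sequential-vs-random gap from result 16 on your own disk, a read-only fio sketch (/dev/sdX is a placeholder; random reads do not modify the disk, but double-check the device name):

    # 4k random reads at queue depth 1, straight against the device
    fio --name=randread --filename=/dev/sdX --rw=randread --bs=4k \
        --iodepth=1 --direct=1 --ioengine=libaio --runtime=30 --time_based
    # compare with large sequential reads:
    fio --name=seqread --filename=/dev/sdX --rw=read --bs=1M \
        --iodepth=1 --direct=1 --ioengine=libaio --runtime=30 --time_based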
