Looking for feedback from experienced Proxmox/Ceph users. I have a 3-node cluster; each node has a dedicated SSD (Samsung 870 EVO 500GB SATA) for Ceph, rated for 300 TBW, plus a separate SSD for the Proxmox boot/OS. This is a brand-new home-lab type of cluster. At this point I only have 2 VMs (OpenWrt and AdGuard Home) with very light disk usage; most of their work is in RAM. Both VMs have their disks in Ceph, replicated across all 3 nodes.
With the whole cluster almost idle, just the 2 VMs running with very light usage, I measured write activity using iotop -ao. Aggregating only the Ceph processes, I get around 500MB/hour of writes just for Ceph. I have to assume this is mostly logs and some replication traffic across nodes, because I did not migrate the VMs between nodes or do anything else during the measurement.
At this write rate it's around 4.5TB/year at idle. Once I start adding more VMs and performing migrations and other activities, this could easily jump to 20-30TB/year. Does that sound reasonable to you? I am worried that the consumer SSDs (I know, I should have purchased enterprise-grade drives, but it's too late now) will not last very long at this rate. Thanks for the feedback.
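For anyone who wants to sanity-check my arithmetic, here is a small sketch of the lifetime estimate I am making. The 500MB/hour and 300 TBW figures are from my setup above; the 25TB/year "loaded" figure is just my guess for the busier cluster, not a measurement.

```python
# Rough SSD lifetime estimate: rated endurance (TBW) divided by a
# constant measured write rate. Decimal units (1 TB = 1,000,000 MB).

TBW_RATING_TB = 300          # Samsung 870 EVO 500GB rated endurance
MEASURED_MB_PER_HOUR = 500   # aggregated Ceph writes seen in iotop -ao
HOURS_PER_YEAR = 24 * 365

def years_to_wear_out(mb_per_hour: float, tbw_tb: float) -> float:
    """Years until the rated TBW is consumed at a constant write rate."""
    tb_per_year = mb_per_hour * HOURS_PER_YEAR / 1_000_000  # MB -> TB
    return tbw_tb / tb_per_year

# Idle: 500 MB/h is ~4.38 TB/year, so ~68 years to hit 300 TBW.
print(f"Idle:   {years_to_wear_out(MEASURED_MB_PER_HOUR, TBW_RATING_TB):.0f} years")

# Guessed loaded rate of 25 TB/year (~2.9 GB/hour): 12 years.
loaded_mb_per_hour = 25 * 1_000_000 / HOURS_PER_YEAR
print(f"Loaded: {years_to_wear_out(loaded_mb_per_hour, TBW_RATING_TB):.0f} years")
```

So even my pessimistic 25TB/year guess stays within the rated endurance for over a decade; the real unknown is how much the Ceph write rate grows with more VMs.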