I have Proxmox installed on a PCIe NVMe SSD. This SSD is not an enterprise-class drive, so it doesn't have a high endurance (TBW) rating.
Now, my Proxmox+Ceph setup has been running for only 4 days, and when I run "iostat -k" I can see that about 280 GB has already been written to my PCIe SSD, which is not good for this kind of SSD...
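For reference, this is roughly how I'm checking the write totals. It reads /proc/diskstats directly (the same counters iostat reports); "nvme0n1" is just a placeholder for the boot SSD's device name on my system:

```shell
#!/bin/sh
# Rough check of cumulative writes since boot, read straight from
# /proc/diskstats (the same counters "iostat -k" reports).
# "nvme0n1" is a placeholder -- substitute your Proxmox boot SSD.

# /proc/diskstats counts 512-byte sectors regardless of the drive's
# native block size; convert to whole GiB with integer division.
sectors_to_gib() {
    echo $(( $1 * 512 / 1024 / 1024 / 1024 ))
}

dev=nvme0n1
# Field 3 is the device name, field 10 is sectors written.
sectors=$(awk -v d="$dev" '$3 == d { print $10 }' /proc/diskstats)
if [ -n "$sectors" ]; then
    echo "$dev: $(sectors_to_gib "$sectors") GiB written since boot"
fi
```

Note these counters reset at boot, so on a box that's been up 4 days they cover exactly the period I'm worried about.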
After searching online, I read that the Ceph monitors write to /var/lib/ceph/mon/ceph-0/*.
I can see that Ceph is constantly writing there, so I think this is going to be a problem: it will wear out my PCIe SSD pretty fast. The idea was to use this SSD only for Proxmox itself, which shouldn't be doing much writing at all.
So, my question: should I (somehow?) move the Ceph monitor store to an enterprise SSD instead? I could add a partition to one of the journal SSDs and have the monitor write there. Those are enterprise SSDs with a very high endurance (TBW) rating.
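What I had in mind is something like mounting a small partition from a journal SSD over the monitor's data directory, e.g. a line like this in /etc/fstab (device name and filesystem are placeholders, not my actual setup):

```
# Placeholder sketch: mount an enterprise-SSD partition over the mon
# store so new monitor writes land there instead of the boot SSD.
/dev/sdX4  /var/lib/ceph/mon/ceph-0  xfs  defaults,noatime  0  0
```

But I don't know if simply mounting over that path (with the mon stopped while copying the data across) is the supported way to do this, or if there's a proper Ceph setting for relocating the mon store.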
Please advise.
Also: if I do need to move the monitor store, can I do that without interrupting Ceph? I'm already running many production VMs on this setup...