How to move the Proxmox Ceph mon to a new partition or drive

GoZippy

Anyone have specific instructions on how to move the ceph-mon directory to a new partition after a default Proxmox install?

I have an 80 GB SSD for the OS and Proxmox, and it keeps filling up with basic OS and Proxmox updates. That pushes Ceph into warn/critical shutdown even with space left on the drive, because the default "safety" thresholds shut down the ceph mon and pool once free space on the partition it lives on drops below a certain percentage, regardless of how many MB or GB are actually left... seems no bueno. Anyhow, I have a pile of extra 80 GB SSDs from server pulls and would like to just add another drive to the failing nodes and migrate only the Ceph services to that new SSD/HDD partition, since the main OS one is running low.
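(I believe the thresholds in question are Ceph's mon_data_avail_warn and mon_data_avail_crit options, which are percentages of free space on the filesystem the mon data lives on; checking the values currently in effect should be something like:)

Code:
ceph config get mon mon_data_avail_warn   # default 30 (%)
ceph config get mon mon_data_avail_crit   # default 5 (%)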

It seems that anything - logs and other services on the pve root partition - can fill the drive up to that level, especially distribution updates and Proxmox update packages, which wreck my pool each time...

The Ceph manual suggests a dedicated 60 GB mon partition per mon per node...

So Q: what is the best way to add a new drive and dedicate it to Ceph?
 
You could start the server with a "rescue disk" like grml, move all files below /var/lib/ceph to a filesystem on the new disk, and then change /etc/fstab to mount that filesystem on /var/lib/ceph.
When this offline operation is not feasible, you need to stop all processes with open files below /var/lib/ceph, then copy its contents to the new filesystem and mount it there.
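Roughly, the online variant could look like this (a sketch only - it assumes the new disk shows up as /dev/sdb with a single partition; adjust device names to your setup):

Code:
# stop all Ceph daemons on this node and check that nothing still uses the directory
systemctl stop ceph.target
lsof +D /var/lib/ceph        # should print nothing

# prepare the new disk and copy the data, preserving ownership, permissions and xattrs
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt
rsync -aHAX /var/lib/ceph/ /mnt/
umount /mnt

# mount the new filesystem on /var/lib/ceph from now on
echo "UUID=$(blkid -s UUID -o value /dev/sdb1) /var/lib/ceph ext4 defaults 0 2" >> /etc/fstab
mount /var/lib/ceph

systemctl start ceph.target

Note that the old copy of the data still sits on the root filesystem underneath the new mount; once everything works you still have to delete it (from a rescue system, or via a temporary bind mount of /) to actually free the space there.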
 
OK, so what's the best way to stop all services mounted there and validate that they are all stopped? I can install a new 80 GB SSD just for the ceph mon; I just need to know best practices for the filesystem type, the best way to copy, and how to make sure Ceph and everything in Proxmox knows where it lives afterwards... I suppose wherever I mount it... anyhow, just wondering about best practices and the steps to move Ceph to a new drive.
 
I guess I will do as suggested and boot to USB and move everything - I was hoping for a CLI way to just kill all processes, copy, and then unmount /var/lib/ceph (which I am still not sure is the problem) and change fstab to use the new drive. Thinking I should also move swap over to an LVM volume on the new SSD... thoughts? I'm trying to free up pve root space, even though it shows only 58% full. Last time I did an OS update it killed my Ceph cluster and then the node would not boot. I had to go in, delete a bunch of logs, and purge old package files from apt; then I had enough space to finish the update and get Proxmox running again, but the Ceph pool died, refused to restart, and shows a critical error for low disk space (1%) on the ceph monitor... not sure. Anyhow, I installed another 80 GB SSD in each monitor node, created a new primary partition spanning the entire 80 GB SSD, and formatted it ext4.
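(For reference, the space cleanup came down to roughly this:)

Code:
apt-get clean                   # drop cached .deb packages
apt-get autoremove --purge      # remove leftover packages
journalctl --vacuum-size=100M   # trim old journal logs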

I thought I could go ahead and copy now, but I need to make sure there are no locks and that I get everything moved correctly.

Are there any specific concerns with Proxmox configs or settings so it knows where the monitor lives?

What do you think I should do with this setup?

[screenshots attached]

So from what I see, /var/lib/ceph shows tmpfs as its source - so... I'm wondering what the correct steps are to make sure I don't screw up Proxmox too...
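To double-check what /var/lib/ceph is actually backed by, I assume something like this would show it:

Code:
findmnt -T /var/lib/ceph   # the filesystem the directory actually lives on
df -h /var/lib/ceph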

[screenshot attached]

Q2 - should I add logical volumes on SSD2 (sdc)?
I was wanting to add a swap partition on ssd2 (sdc) and remove the one on sda, as well as move all of /var/lib/ceph over to a new logical volume on sdc.

Should work - right?
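If I go the LVM route on sdc, I'm picturing roughly this (the volume group name vgceph and the sizes are just placeholders):

Code:
pvcreate /dev/sdc
vgcreate vgceph /dev/sdc
lvcreate -L 8G -n swap vgceph             # swap LV
lvcreate -l 100%FREE -n cephmon vgceph    # rest of the disk for /var/lib/ceph
mkswap /dev/vgceph/swap
mkfs.ext4 /dev/vgceph/cephmon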
 
https://askubuntu.com/questions/1040611/will-multiple-swap-spaces-be-effective

suggests there may be some benefit from the Linux default round-robin behavior if I add swap on two equal SSDs, "like raid0 with the performance boost", by assigning both swap partitions equal priority...
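If I read that right, the fstab entries would look something like this (device names are just examples), and swapon --show should then list both areas with the same priority:

Code:
# two swap areas with equal priority -> the kernel stripes pages across both
/dev/sda3          none  swap  sw,pri=5  0  0
/dev/vgceph/swap   none  swap  sw,pri=5  0  0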

This is all new to me, so I want to make sure I do it right and see if it helps... the main thing is to move the ceph mon over to the new drive so it is not taking up space on pve root.
 
