Reconfiguration of Proxmox boot / storage ceph

itlinux · Jan 7, 2024
Hello guys, I have a server that is currently set up as follows:

2 SSDs in a ZFS mirror for boot, and 4 disks for Ceph. I have the option to install NVMe drives, so my plan was to do the following:

Break the mirror, use the NVMes as the new mirror, and then use the freed SSDs to enlarge the Ceph pool. Any suggestions on that? Or, if I need to reinstall, how can I do that without losing any data? I have 4 nodes, so I could move one node at a time, but I would like this to be as transparent as possible, without data disruption. Thanks.
 
Is there a reason why you want to use SSDs for Ceph? Are they larger than the NVMes?
 
The entire box is SSD; I have not bought the NVMes yet. NVMe performance is better, so moving the OS to NVMe lets me use identical SSD drives for Ceph and keep one standard drive type in the box. That way, if a drive dies, the people at the datacenter (remote hands) can just pull it out and replace it instead of me going there.
 
I can't tell you the exact commands, but I would personally recommend that you try this out first in a virtualized PVE environment with 2 virtual disks in a ZFS mirror. In theory you should go this way:

  • remove one SSD from the mirror
  • replace the removed SSD with an NVMe
  • tell ZFS to use the NVMe in the ZFS mirror (replacement)
  • wait until the pool is healthy again (resilver finished)
  • make sure the new disk is bootable with proxmox-boot-tool refresh (not sure if needed) and update-initramfs -u
  • remove the first boot SSD and try to boot from the single NVMe alone (to see if it works)
  • if it works, replace the remaining SSD with the second NVMe in your ZFS config (replacement)
  • you might need proxmox-boot-tool again
  • boot from the new NVMe (to see if that also works)
  • use the SSDs for Ceph
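The steps above can be sketched roughly as below. This is an untested outline, not a recipe: the pool name rpool and the default PVE partition layout (ESP on partition 2, ZFS on partition 3) are assumptions, and all device paths are placeholders — check yours with `zpool status` and `lsblk` before running anything.

```shell
# Detach one SSD from the boot mirror (placeholder device IDs throughout)
zpool detach rpool /dev/disk/by-id/ata-SSD_OLD-part3

# Copy the partition table from the remaining SSD to the new NVMe,
# then randomize the GUIDs on the copy
sgdisk /dev/disk/by-id/ata-SSD_REMAINING -R /dev/disk/by-id/nvme-NVME_NEW
sgdisk -G /dev/disk/by-id/nvme-NVME_NEW

# Attach the NVMe to the mirror and wait for the resilver to finish
zpool attach rpool /dev/disk/by-id/ata-SSD_REMAINING-part3 \
    /dev/disk/by-id/nvme-NVME_NEW-part3
zpool status rpool

# Make the NVMe bootable (assumes the ESP is partition 2, the PVE default)
proxmox-boot-tool format /dev/disk/by-id/nvme-NVME_NEW-part2
proxmox-boot-tool init /dev/disk/by-id/nvme-NVME_NEW-part2
proxmox-boot-tool refresh
update-initramfs -u
```

Repeat the attach/format steps for the second NVMe once booting from the first one alone has been verified.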

Personally, I would fix boot first on all 4 nodes, then wipe the old SSDs and integrate them on all nodes at the same time, so you don't have that much data movement between nodes (on the nodes themselves there will be a rebalancing). Since you have a cluster and Ceph, you should be able to do this without any downtime if you move resources to different host(s).
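Wiping a freed SSD and adding it as an OSD could look roughly like this, assuming you manage Ceph through pveceph and /dev/sdX stands in for the freed drive — a sketch only, to be run per node while watching cluster health:

```shell
# Clear the old ZFS partition signatures from the freed SSD
# (/dev/sdX is a placeholder -- double-check the device first!)
wipefs -a /dev/sdX
ceph-volume lvm zap /dev/sdX --destroy

# Create a new OSD on the wiped disk via the PVE tooling
pveceph osd create /dev/sdX

# Watch the backfill/rebalance before moving on to the next node
ceph -s
watch ceph osd df
```

Waiting for `ceph -s` to report HEALTH_OK between nodes keeps only one rebalance in flight at a time.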
 
