Upgrading a RAID card: do I need to reinstall OSD disks?

Ting

Member
Oct 19, 2021
Hi,

Currently I have a Proxmox node with a RAID card set up as JBOD, and two OSD disks behind it. I want to replace this RAID card with an IT-mode card; my question is how to handle the two existing OSD disks.

1. Before I swap the RAID card, I could destroy the two OSDs and re-create them afterwards.
2. Or, I could just OUT the two OSDs, replace the old RAID card with the IT-mode card, reinsert the two disks, and hope they are automatically recognized by the system, so that I can IN them again and the cluster rebalances by itself?

What should I do? Many thanks in advance.
 
What should I do?

First: make backups. Never do operations like this one without tested backups!

Second: Ceph is a reliable technology. Usually it is set up with redundancy in mind. Destroying one host with all its OSDs should be possible without data loss - as long as the minimum requirements are fulfilled and (at least) default settings are used. Note that there are pitfalls... see also https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/
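As a quick sanity check before touching the hardware, you can confirm the cluster is healthy and that your pools have enough redundancy to survive these OSDs going down. A minimal sketch using the standard Ceph CLI (the guard just makes it harmless on a machine without Ceph):

```shell
# Check cluster health and per-pool replication settings.
# With the default size=3 / min_size=2, a pool can tolerate losing
# one replica (e.g. all OSDs of one host) without data loss.
if command -v ceph >/dev/null 2>&1; then
  ceph -s                  # overall health, OSD up/in counts
  ceph osd pool ls detail  # size / min_size per pool
else
  echo "ceph CLI not available here"
fi
```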

Your problem is that it is not clear whether JBOD mode presents the very same geometry of the physical disk to PVE. If it does, then nothing will change when moving to the native (IT-mode) controller. Only the manufacturer knows this.
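If the geometry does match, option 2 from the question boils down to a controlled down/up cycle around the card swap. A sketch only, assuming the two OSDs have IDs 0 and 1 (check yours with `ceph osd tree`); the guard keeps it inert on a machine without Ceph:

```shell
# Swap the controller without triggering a rebalance (hypothetical OSD IDs 0 and 1).
if command -v ceph >/dev/null 2>&1; then
  ceph osd set noout                    # stop Ceph from marking down OSDs "out" and rebalancing
  systemctl stop ceph-osd@0 ceph-osd@1  # stop the two OSD daemons
  # ...power off, swap the RAID card for the IT-mode card, boot...
  systemctl start ceph-osd@0 ceph-osd@1
  ceph osd unset noout                  # let the cluster settle again
  ceph -s                               # verify health before calling it done
else
  echo "ceph CLI not available here"
fi
```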

The disk naming (sda, sdb) may change, of course. That's why using /dev/disk/by-id/xyz or /dev/disk/by-uuid is always recommended.
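For example, the stable names can be listed like this (the directories exist on any Linux system with udev; the entries under them are whatever your disks report):

```shell
# Stable identifiers survive a controller swap; /dev/sdX order may not.
for d in /dev/disk/by-id /dev/disk/by-uuid; do
  [ -d "$d" ] && ls -l "$d" || echo "$d not present on this system"
done
```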

Test it: take a look at the current presentation while still on the JBOD controller. Do that with basic tools like smartctl -i /dev/sda ; fdisk -l /dev/sda ; sfdisk -d /dev/sda. Then "down" that OSD, remove the physical disk, put it into another computer, repeat the same commands and compare the output.
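One way to make that comparison mechanical is to capture the output to files and diff them. A sketch, assuming the disk shows up as /dev/sda on both machines (adjust the device name), with a guard so it does nothing on a box without the disk or smartctl; `capture` is a hypothetical helper, not a real tool:

```shell
# capture <device> <tag>: save the disk's identity and partition table.
capture() {
  smartctl -i "$1" > "$2-smart.txt"
  fdisk  -l "$1"   > "$2-fdisk.txt"
  sfdisk -d "$1"   > "$2-sfdisk.txt"
}
if [ -b /dev/sda ] && command -v smartctl >/dev/null 2>&1; then
  capture /dev/sda before
  # Move the disk to the other machine, run: capture /dev/sda after
  # Then compare, e.g.: diff before-sfdisk.txt after-sfdisk.txt
else
  echo "disk or smartctl not available; nothing captured"
fi
```

If the `sfdisk -d` dumps are identical, the partition table (and hence the OSD data layout) is being presented the same way on both controllers.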

Oh..., did I say "make backups first"?