[SOLVED] Ceph Nautilus: moving a Luminous-created OSD to another node

RobFantini

Hello
With prior versions of Ceph I could stop an OSD, move the drive to another node, and the OSD would mount there. I forget if I also pressed out...

With PVE 6 and Ceph Nautilus that method does not work. I have tried stop then move, and stop, out, then move.

We are moving the OSDs because the motherboard needs to be replaced; perhaps the bad hardware is causing the issue.

In any case, what procedure should be used to move OSDs?
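
For reference, the CLI equivalent of what I press in the GUI (a sketch; replace <id> with the OSD number):
Code:
# stop the OSD daemon on the source node
systemctl stop ceph-osd@<id>

# mark it out so ceph can rebalance while the disk is gone
ceph osd out <id>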
 
To make sure that bad hardware was not the cause, I tried to move an OSD on another node: stop, out, then moved the drive. Same result.
 
So it looks like the only way to safely move an OSD involves restarting both the source and target systems.

I'll be doing the OSD moves in about 10 hours. If there are any suggestions for a better way, please reply.
 
I tried the following:

On the node that has the OSDs, at the PVE web page: for each OSD, pressed stop and out.

Shut down that node.

Removed the OSD drives.

Started the node back up.

Put one OSD drive into another node.

Restarted that other node.

That did not work.

At PVE Ceph > OSD, the OSDs still show up as down and out on the original node.
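
One thing I have not tried yet: OSDs created by ceph-volume keep their metadata in LVM tags, so the target node may need an explicit activate to rebuild the OSD directory and start the daemon. A sketch, assuming the moved disk is visible on the new node:
Code:
# list the ceph-volume managed OSDs this node can see
ceph-volume lvm list

# recreate the OSD mounts from the LV tags and start the OSDs
ceph-volume lvm activate --all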


So do I need to do stop, out, and destroy, then move the drive and create a new OSD?
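
The CLI version of the stop/out/destroy sequence, as far as I understand it (a sketch; replace <id> with the OSD number):
Code:
ceph osd out <id>
systemctl stop ceph-osd@<id>
# deletes the cephx key and marks the OSD destroyed; the ID stays reserved for reuse
ceph osd destroy <id> --yes-i-really-mean-it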
 
Stop/out/destroy, then move; creating the new OSD gives this error:
Code:
Parameter verification failed. (400)

bluestore: property is not defined in schema and the schema does not allow additional properties
journal_dev: property is not defined in schema and the schema does not allow additional properties

So I tried to zap it:
Code:
ceph-disk zap /dev/sdl
-bash: ceph-disk: command not found

I think the release notes for PVE 6 / Ceph Nautilus stated that some commands were eliminated; ceph-disk was dropped in favor of ceph-volume.

Off to search. Any suggestions, please reply.
 
OK thanks, I did this:
Code:
ceph-volume lvm zap /dev/sdl
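
Side note: plain zap wipes the data but can leave the old LVs in place; there is a more thorough variant where the --destroy flag also removes the LVs, VG, and partitions:
Code:
ceph-volume lvm zap --destroy /dev/sdl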

Still have the issue when trying to create the OSD:
Code:
Parameter verification failed. (400)

journal_dev: property is not defined in schema and the schema does not allow additional properties
bluestore: property is not defined in schema and the schema does not allow additional properties
 
So, trying to use the same OSD number that I have the drive labeled with:
Code:
# ceph-volume lvm create --osd-id 42 --data /dev/sdl
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd tree -f json
-->  RuntimeError: The osd ID 42 is already in use or does not exist.

I'll work on removing 42 first...
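
If the ID has to be freed completely rather than reused, purge should do it (a sketch; purge combines the CRUSH removal, auth key deletion, and osd rm in one step):
Code:
ceph osd purge 42 --yes-i-really-mean-it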
 
Code:
ceph-volume lvm create --data /dev/sdl
That worked, and it used 42 as the OSD number...

So I'll assume the PVE web interface for OSD creation needs a patch or two. Let me know if I can test...
 
Parameter verification failed. (400)
journal_dev: property is not defined in schema and the schema does not allow additional properties
bluestore: property is not defined in schema and the schema does not allow additional properties
This looks like your GUI is trying to execute the old API calls. Do all nodes have the same version? Browser cache?
 
This looks like your GUI is trying to execute the old API calls. Do all nodes have the same version? Browser cache?

All have the same version: PVE Manager pve-manager/6.0-5/f8a710d7.

I'll try refreshing the cache in Firefox; in fact I'll stop and start Firefox first, then try stop/out/move on the next OSD.
 
The issue was not the Firefox cache.
I did these:

Restarted Firefox.

OSD: stop, out, then moved the drive to the other node.

PVE: Ceph > OSD, reload.

The OSD still shows as down and out on the original node.
 
The above did not work.

The drives are mounted on tmpfs:
Code:
tmpfs            tmpfs      40G   24K   40G   1% /var/lib/ceph/osd/ceph-26
tmpfs            tmpfs      40G   24K   40G   1% /var/lib/ceph/osd/ceph-47

I rebooted and it is the same.

So all 6 nodes have the moved OSDs on tmpfs.

This cannot be normal?
 
Well, I have 3 new OSDs to add.

The first one I added at the PVE web page. It mounts on tmpfs, so perhaps that is normal?

I do not think so, looking at the following:

OSD 70, added at PVE:
Code:
tmpfs                       tmpfs      48G   52K   48G   1% /var/lib/ceph/osd/ceph-70

fdisk shows no partitions:
Code:
# fdisk -l /dev/sdo
Disk /dev/sdo: 372.6 GiB, 400088457216 bytes, 781422768 sectors
Disk model: INTEL SSDSC2BX40
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

I'll start moving VMs off of this pool.
 
The first one I added at the PVE web page. It mounts on tmpfs, so perhaps that is normal?
This is normal when using ceph-volume: if you look inside the ceph-70 directory you will find a symlink named block pointing to the actual device.
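
For example (using osd.70 from the output above):
Code:
# the tmpfs directory holds only small metadata files;
# "block" is a symlink to the logical volume with the actual data
ls -l /var/lib/ceph/osd/ceph-70/block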
 
This is normal when using ceph-volume: if you look inside the ceph-70 directory you will find a symlink named block pointing to the actual device.

Thanks for the reply.

After I recreated the drives we have a lot less storage available on the pool. At the start of the day rdb_ssd had 11T; now it has about 8T. So I thought the issue was related to the tmpfs mount...

It has been a long day, so I may be missing something normally obvious...

Perhaps a keyring issue or something? We do not use keyrings at /etc/pve/priv/ceph/.
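
To sanity-check where the space went, compare the raw and per-pool numbers (output layout varies a little between Ceph versions):
Code:
# cluster-wide raw capacity and per-pool usage
ceph df

# per-OSD utilization and weights, arranged by CRUSH tree
ceph osd df tree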
 
