Hi, this evening I upgraded to Proxmox 6.3 and Ceph Octopus.
After restarting an OSD, the OSD process started, but the OSD stays offline, and I don't see anything in the OSD log indicating that it is doing work; it looks like a normal start, but the OSD is not considered up and in.
I will leave it like that during the night, to see if it is doing the format conversion that is mentioned in the manual for the upgrade from Nautilus to Octopus.
I also deleted one OSD and attempted to create it again.
The result was unsuccessful:
pveceph osd create /dev/sdb
create OSD on /dev/sdb (bluestore)
wipe disk/partition: /dev/sdb
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.506091 s, 414 MB/s
--> AttributeError: module 'ceph_volume.api.lvm' has no attribute 'is_lv'
command 'ceph-volume lvm create --cluster-fsid 4bcfed01-7c42-470f-99a7-dd54560eb61e --data /dev/sdb' failed: exit code 1
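For what it's worth, that AttributeError usually means the running ceph-volume code is calling an `is_lv` helper that the installed `ceph_volume.api.lvm` module doesn't provide, e.g. a version mismatch or leftover files from the pre-upgrade packages. This is just a guess at the mechanism; the sketch below uses hypothetical module stubs, not real Ceph code, to show how the exact error message in the log arises:

```python
import types

# Stand-in for an older api.lvm module that lacks is_lv (hypothetical stub).
old_lvm = types.ModuleType("ceph_volume.api.lvm")

# Stand-in for a newer api.lvm module that provides is_lv (hypothetical stub).
new_lvm = types.ModuleType("ceph_volume.api.lvm")
new_lvm.is_lv = lambda dev: False  # always False in this toy example

def check(api):
    """Call api.is_lv the way newer ceph-volume code would."""
    try:
        return api.is_lv("/dev/sdb")
    except AttributeError as e:
        return str(e)

print(check(old_lvm))  # module 'ceph_volume.api.lvm' has no attribute 'is_lv'
print(check(new_lvm))  # False
```

If that is the cause, making sure the ceph packages on the node are all at the same Octopus version (and that no stale ceph-volume files linger) would be the first thing to verify.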