Ceph OSD Unable to find storage

Hello,

I have a fresh install of Proxmox 8.1.1 with the drive formatted as XFS. For some reason, when I click the Create: OSD button in the OSD section of the Ceph panel of the web GUI, it keeps saying "No Disks unused", but when I run lsblk I can see 1.7 TB free on the drive:

root@dws-zve-3:~# lsblk
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0  1.8T  0 disk
├─sda1               8:1    0 1007K  0 part
├─sda2               8:2    0    1G  0 part /boot/efi
└─sda3               8:3    0  1.8T  0 part
  ├─pve-swap       252:0    0  7.6G  0 lvm  [SWAP]
  ├─pve-root       252:1    0   96G  0 lvm  /
  ├─pve-data_tmeta 252:2    0 15.9G  0 lvm
  │ └─pve-data     252:4    0  1.7T  0 lvm
  └─pve-data_tdata 252:3    0  1.7T  0 lvm
    └─pve-data     252:4    0  1.7T  0 lvm

Any help would be greatly appreciated.
 

An OSD needs a whole disk.
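As a quick sanity check (commands are just examples, adjust the device name to your system): the GUI only offers a disk for an OSD if it carries no partitions, no filesystem and no LVM/ZFS signatures, and your lsblk output shows sda is already fully consumed by the installer. You can confirm it like this:

# Show partitions, filesystem signatures and mountpoints on sda
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sda

# Show the installer's LVM layout - the "data" thin pool is what occupies the remaining 1.7T
lvs pve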
Hi, thanks for the reply. Is there any way to partition the disk so that the Proxmox install partition takes up 50 GB while the rest is allocated to Ceph? In my case it's sda3: root and swap need to fit in a single 50 GB partition, but at the moment it's about 100 GB.
 
In theory you can create an OSD on the command line and give it an existing logical volume.
In practice you do not want to do this. For Ceph to work you need to have multiple OSDs in your nodes, i.e. multiple disks.
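If you really wanted to try it anyway, a rough sketch would look something like this, assuming you have first freed up space in the pve volume group (for example by removing the local-lvm thin pool, which destroys anything stored on it); the LV name and size are placeholders:

# Carve out a logical volume for the OSD in the existing volume group
lvcreate -L 800G -n ceph-osd0 pve

# Hand the LV to Ceph; ceph-volume prepares and activates it as an OSD
ceph-volume lvm create --data pve/ceph-osd0

Again, this is unsupported territory and you lose the simple whole-disk handling the GUI expects.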
I've got 6 nodes, each with a single 1 TB hard drive. If I use another separate drive just for Ceph, then that 1 TB gets wasted, which is why I'm wondering if it's possible.
 
I would always use two small SSDs in a mirror for the operating system and then suitable SSDs for the OSDs.
 
I would always use two small SSDs in a mirror for the operating system and then suitable SSDs for the OSDs.
But I thought Ceph already replicates its data two or three times, so having Ceph AND ZFS sounds pointless?
 
In theory you can create an OSD on the command line and give it an existing logical volume.
In practice you do not want to do this. For Ceph to work you need to have multiple OSDs in your nodes, i.e. multiple disks.
Seriously, this sounds like overkill, as a 1 TB drive just for the OS would be wasted and I've only got 1 TB drives. Surely SATA is fine and should provide enough bandwidth for both the OS and Ceph?
 
Yes, that's right, Ceph usually replicates its data three times. But Ceph (like ZFS, by the way) wants to be the master of the disk. Both want to set up the disks the way they need them.

But one has nothing to do with the other. Here you simply separate the operating system from the data disks, making both independent and portable. This is how I design my systems because I have exactly this requirement: maximum performance with maximum flexibility and minimal maintenance effort.
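For what it's worth, the replication level is a per-pool setting you can inspect and change yourself; <poolname> is a placeholder and 3/2 is the usual default:

# Show how many copies Ceph keeps and how many must be present for the pool to stay writable
ceph osd pool get <poolname> size
ceph osd pool get <poolname> min_size

# Change the number of replicas if required
ceph osd pool set <poolname> size 3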
 
Seriously, this sounds like overkill, as a 1 TB drive just for the OS would be wasted and I've only got 1 TB drives
That's why I wouldn't use 1 TB SSDs for the OS, but 120 GB or at most 240 GB ones. Then you either have to buy additional SSDs, or you cobble your Ceph together in a half-baked way and live with the corresponding restrictions on maintenance and administration via PVE and so on.
 
That's why I wouldn't use 1 TB SSDs for the OS, but 120 GB or at most 240 GB ones. Then you either have to buy additional SSDs, or you cobble your Ceph together in a half-baked way and live with the corresponding restrictions on maintenance and administration via PVE and so on.
Are you sure Ceph will work when the two drives dedicated to the OS are formatted with ZFS? I saw a disclaimer saying RAID configurations are blocked by Ceph.
 
Are you sure Ceph will work when the two drives dedicated to the OS are formatted with ZFS?
I don't really understand what your point is.

The operating system disks have nothing to do with Ceph. It therefore doesn't matter whether XFS, EXT4 or ZFS runs on a hardware RAID, mdadm or something else. For Ceph you use dedicated disks; these are wiped of any residual data, included in the cluster as OSDs (without partitions or anything else), and run completely independently of each other.
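As a sketch of what that looks like on the command line (/dev/sdb is only an example device, double-check before wiping anything):

# Wipe leftover partition tables, filesystems and LVM signatures from the disk
ceph-volume lvm zap /dev/sdb --destroy

# Add the clean disk as an OSD through the Proxmox tooling
pveceph osd create /dev/sdb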

You should not use a hardware RAID controller for Ceph or ZFS. HBA mode on such a controller is often fine, but a real HBA is usually best.

What you want to do here is not officially supported, which is why I won't go into such setups in detail.

Maybe that answers your question?
 
