Ceph Disks

Mar 2, 2023
Ok, new to ProxMox, Ceph and Linux!

I have (3) servers with (2) SSDs in each. ProxMox and Ceph are installed. ProxMox has the entire first SSD and Ceph has the second on each server.

My question is: can I reduce the ProxMox partition on the first SSD and give the balance to Ceph? Or, at least, can I re-install ProxMox using only a small part of the first SSD and again give the balance to Ceph?

Thanks!
 
No one wants to answer??
 
My question is can I reduce the ProxMox partition on the first SSD and give the balance to CEPH?
Yes, it's possible.

Or at least re-install ProxMox and use only a small part of the first SSD and again give the balance to Ceph?
That's the same question.

The reason no one answered is that the question is absurd. If you don't know WHY it's absurd, you haven't done the bare minimum research on what Ceph is or does. Start reading.
 
Thanks for the help.

I have read the ProxMox and Ceph documentation. All of the examples use complete disks for OSDs, so I was not sure if I could use a partition.

I am new to Linux, so sorry for asking a dumb question.

The first question was whether I can reduce the space ProxMox uses, so it is different from the second question.
 
You can (I guess :)), but you should not (definitely).
Reducing a filesystem is not that easy, though (a rough idea of what it involves is sketched below).

If you want the system SSD for Ceph, you are better off buying a small new SSD for the system and using the former system SSD as an OSD.


If your system is just for testing and fun, go ahead and try it; that's what testing-and-fun systems are for. Otherwise, don't. :)
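If you do want to try the shrink anyway, it boils down to something like the following. This is only a rough sketch, assuming the stock ext4-on-LVM layout (VG "pve", LV "root"); the 30G target is just an example, and the root filesystem cannot be shrunk while mounted, so it has to be done from a live/rescue system:

    e2fsck -f /dev/pve/root                   # check the filesystem first
    lvreduce --resizefs -L 30G /dev/pve/root  # shrink the ext4 filesystem and the LV together
    # the freed space is still inside the LVM partition; to hand it to ceph you would also
    # have to shrink the PV (pvresize --setphysicalvolumesize), shrink the partition (parted)
    # and create a new partition in the gap - each step can eat your data if it goes wrong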
 
The "servers" are Lenovo workstations with room for only (2) NVMe drives and I want to maximize the storage however I may just give up and leave one of the 2TB drives for ProxMox. Just seems like a waste.

I run "production" home VMs, 3CX, Home Assistant etc. but using ProxMox is for the learning experience.
 
You can use, for example, a SATA USB enclosure with a small SATA SSD inside for the OS and dedicate both your NVMe drives to Ceph. That's what I had to do when I faced similar limitations.

I am pretty sure the Proxmox GUI would not allow you to use partitions for ceph, even if that is possible with low-level commands. I would avoid any non-supported (and consequently not tested) ceph configuration.
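For reference, if someone does go the low-level route, creating an OSD directly on a leftover partition would look roughly like this (a sketch only; the partition name is a made-up example):

    ceph-volume lvm create --data /dev/nvme0n1p4   # example partition, adjust to your layout
    ceph-volume lvm list                           # verify the new OSD was picked up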
 
That's a good idea, thanks!
 
Have you had long-term success using a USB SSD? I like the idea, but it scares me to put ProxMox on an external drive. I have looked at USB RAID cases and thought about (2) USB drives using ZFS RAID 1.

Thoughts?
 
Theoretically you can shrink a ZFS volume by migrating it to a smaller disk (actually kind of like adding the smaller disk in a RAID-0 and removing the original) and then back, but it's not for the faint of heart:

https://askubuntu.com/questions/1231355/how-can-i-shrink-a-zfs-volume-on-ubuntu-18-04

If you want a custom partition layout, you can't use the normal installer; officially you'd have to install Debian first:

https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster

It really depends on how comfortable you are with the command line and on how much support you need. If you are comfortable, you could try installing on a small SSD, mirror to a partition on the 2TB second disk and make that bootable, then split the mirror, remove the small SSD and replace it with another big SSD (a rough sketch of the mirror-and-split step is below). Then you can use the rest for Ceph.

Alternatively, you can also tell Ceph to create a big file for its data; that way you can leave the partitions as-is. I don't think the GUI supports this, though, and I don't know what the performance and extra load on the SSD would be.

Either way, be sure you have backups!
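If it helps, the mirror-and-split step above would look roughly like this on a ZFS root. Treat it as a sketch only: the pool name "rpool" matches a default Proxmox ZFS install, but the partition names (and the spare ESP on the new disk) are assumptions:

    zpool attach rpool /dev/sda3 /dev/nvme0n1p3   # turn the root pool into a two-way mirror
    zpool status rpool                            # wait here until the resilver has finished
    proxmox-boot-tool format /dev/nvme0n1p2       # make the new disk bootable (assumed ESP partition)
    proxmox-boot-tool init /dev/nvme0n1p2
    zpool detach rpool /dev/sda3                  # split the mirror; the small SSD can be removed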
 
I used USB sticks for the OS a few years ago, and that worked fine for about a year; they never disconnected or hung up, but then one of the servers started to produce odd errors during updates (so I don't recommend USB sticks). I then switched to a USB SATA enclosure, but only for a few months, so I cannot speak to real long-term issues; I had no issues at all while using it.

I use a few external USB HDDs for my cephfs storage, and some of them do disconnect once every few months, and then I need to reboot to recover. I guess USB reliability depends on the hardware...

I don't think a multi-disk USB enclosure is a good idea, as the enclosure itself would be a single point of failure. If you want redundancy, I think using lvm mirroring to independent USB drives connected to different ports would be better. To me it is overkill for a home environment (unless you want that learning experience).
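A rough sketch of what that lvm mirroring could look like, assuming the stock "pve" volume group and the second USB disk showing up as /dev/sdc (device names are placeholders):

    pvcreate /dev/sdc1                               # prepare a partition on the second USB disk
    vgextend pve /dev/sdc1                           # add it to the root volume group
    lvconvert --type raid1 -m 1 pve/root /dev/sdc1   # mirror the root LV onto the new disk
    # note: the bootloader still has to be installed on both disks, or the node won't boot
    # if the wrong one dies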
 
Well, some progress!

I used a spare system with (1) 256GB SSD and installed ProxMox on just the first 30GB. Then I wiped the balance and was able to create a 208GB Ceph OSD using the PM GUI.

Now I may try to reduce the LVM partition on the live servers or just re-install PM, if I can remember the steps I took to get this far. I have several VMs that I do not want to re-create, so I will try to move them to my Windows box first (roughly as sketched below).


[screenshot attached]
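In case it helps with moving the VMs off first: a plain vzdump backup can be copied to any other machine and restored after the re-install. A minimal sketch (VMID 100 and the dump directory are just examples):

    vzdump 100 --mode snapshot --compress zstd --dumpdir /root/backup
    # copy the resulting .vma.zst to the Windows box (scp, an SMB share, ...), then after
    # the re-install copy it back and restore it:
    qmrestore /root/backup/vzdump-qemu-100-<timestamp>.vma.zst 100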
 
Hi @MoniPM

Great job noticing that. I had an older lab cluster that was running Proxmox 7.0, and I checked that it could not add a partition as an OSD, but after I updated to the current 7.4.3, I too was able to add a partition as an OSD. So it became supported somewhere between those versions.
More options are always good...
 
I can't seem to reduce the size of the LVM, so I'm going to just re-install PM and not use all of the disk. As a life-long Windows Server guy, I find Linux very difficult. I know that's not a good thing to say in this forum.

I do like ProxMox a lot, just not Linux. :)
 
I did manage to reduce the main LVM partition and allocate the space to Ceph.

[screenshot attached]

But I deleted the LVM-Thin partition in the process. I'm not sure which local partitions I really need, because I want most things in the Ceph space.

I will probably start over and rebuild the servers knowing what I want now.

This has been a learning experience for sure.
 

You don't need those extra logical volumes if you plan to use Ceph. You also don't need that much space for your root volume. I use 32GB as a root partition, and it seems to be enough. You might also want to leave 8-16GB for swap, so I think that 40-64GB for the root LVM partition should be more than enough.
If you want more of a learning experience, I would suggest partitioning both NVMe drives and setting up a mirror for the root volume group.
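For what it's worth, dropping those extra volumes boils down to something like this; a sketch assuming the stock VG name "pve" and the default "local-lvm" storage entry:

    lvremove pve/data        # removes the local-lvm thin pool - destroys any guest disks still on it!
    pvesm remove local-lvm   # drop the matching storage definition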
 
Thanks!

So I can set up only the root partitions in RAID 1, and if one drive fails the system will still boot?

I was thinking that having (3) servers in the cluster was redundancy enough, but I like your idea. Also, having symmetrical Ceph partitions does appeal to my OCD! :)
 
Ok, so I took mfed's suggestion and set up the two drives as ZFS RAID 1 for booting, but only used 64GB of the space and let Ceph use the rest, with each drive as an OSD. It seems to be working fine!


[screenshot attached]
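For anyone repeating this: the space left over after a size-limited install can be turned into OSDs with something along these lines. A sketch only; the partition numbers depend on what the installer created:

    sgdisk -n 0:0:0 -t 0:8300 /dev/nvme0n1         # add a partition covering the remaining free space
    partprobe /dev/nvme0n1                         # re-read the partition table
    ceph-volume lvm create --data /dev/nvme0n1p4   # the partition number here is an assumption

Per the earlier posts, a current GUI (7.4.x here) can also pick the partition directly when creating the OSD.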
 