Re-partition Boot drive - Use remaining space for OSD - After Install


New Member
Jul 27, 2023
So I have 4 nodes in my cluster with 500GB spinning rust in them for boot, which I fully allocated to the OS before I understood how well Ceph performs even with spinning disks.

I would like to leave a 50GB partition (open to recommendations) in place for the host OS, but then be able to use the remaining space for an OSD.

This would allow me to have 1 NVMe, 1 SSD, and 1 HDD in each of those three nodes, which seems like a nice setup.

This is a LAB setup at home for my tinkering, so I'm not expecting blistering speeds, just trying to get the most out of the gear I have. And since I'm using Ceph for ALL shared storage, it seems wasteful to have almost 2TB of disk not being used.

If you think this is a good idea, how do I do this WITHOUT doing a fresh install on those nodes? Or, if a fresh install is needed, can I use the backup function and then restore to the same node after a format and repartition of the drive?

Is there a write-up for that I can read?
NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                                                                                                     8:0    0 465.8G  0 disk
├─sda1                                                                                                  8:1    0  1007K  0 part
├─sda2                                                                                                  8:2    0     1G  0 part /boot/efi
└─sda3                                                                                                  8:3    0 464.8G  0 part
  ├─pve-swap                                                                                          253:1    0     8G  0 lvm  [SWAP]
  ├─pve-root                                                                                          253:2    0    96G  0 lvm  /
  ├─pve-data_tmeta                                                                                    253:3    0   3.4G  0 lvm 
  │ └─pve-data                                                                                        253:5    0 337.9G  0 lvm 
  └─pve-data_tdata                                                                                    253:4    0 337.9G  0 lvm 
    └─pve-data                                                                                        253:5    0 337.9G  0 lvm 
sr0                                                                                                    11:0    1  1024M  0 rom 
nvme0n1                                                                                               259:0    0 465.8G  0 disk
└─ceph-- NAME REMOVED                                                                                  0 465.8G  0 lvm

So, for example, I wish I had an sda4 with 400-ish GB of space to use for an OSD.
So, have I understood your setup correctly? 4 nodes, each with a 500GB spinning disk and a 500GB NVMe? If not, please correct me.

Generally you can do this, and it will probably work fine. However, it is neither a common nor a recommended setup, and you might encounter problems down the road. But since this is a home lab setup, I assume that's OK for you.

Resizing the disk:
While LVM lets you do that, it also warns you during the process that it cannot guarantee your data won't be corrupted.
Another option would be to copy the contents of the partitions somewhere else, create new partitions, and copy the data back. While that's more work, it's cleaner.
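Given the lsblk output above, there is also a pure-LVM route that avoids repartitioning entirely: free up the pve/data thin pool and hand a new LV to Ceph. This is only a sketch, not what the poster describes; all names come from the lsblk output, the sizes are assumptions, and removing or shrinking the thin pool destroys anything stored on local-lvm, so back up first:

```shell
# Hedged sketch: reuse space inside the existing VG instead of repartitioning.
# Assumes the VG is "pve" and nothing you need lives on the local-lvm thin pool.

# 1. Remove the local-lvm thin pool to free extents in the VG
#    (shrinking a thin pool with lvreduce is risky; removal is cleaner if it's empty)
lvremove pve/data

# 2. Create a new LV in the freed space for the OSD
lvcreate -l 100%FREE -n osd-block pve

# 3. Hand the LV to Ceph (CLI alternative to the GUI)
ceph-volume lvm create --data pve/osd-block
```

You would also want to remove the local-lvm storage entry from the Proxmox storage configuration beforehand so nothing tries to allocate on the removed pool.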

Partition as OSD:
People who tried this in the past encountered some issues; however, I tried it and ran into none by:
  • Creating the partitions with parted, using select /dev/sdX and then mkpart primary START END
  • Adding the OSD over the GUI
So it can be done, but it's more on the tinkering side than the production one.
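The steps above can be sketched as a short command sequence. The device name, partition number, and start offset are assumptions for illustration; check `parted print` against your own layout before running anything like this, and note these commands modify the partition table:

```shell
# Hedged sketch of the parted + OSD steps; /dev/sda, sda4, and 65GB are assumptions.

# Inspect the existing partition table first
parted /dev/sda print

# Create a new partition in the free space after the existing partitions
parted /dev/sda mkpart primary 65GB 100%

# Make the kernel re-read the partition table
partprobe /dev/sda

# Then add the OSD over the GUI, or (untested here) from the CLI:
ceph-volume lvm create --data /dev/sda4
```

The GUI route the poster used is equivalent to the last step; `ceph-volume` is the underlying tool and accepts a partition as `--data`.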

Also, I would strongly encourage you to make a backup before you start!

However you decide, let me know if you need help.
Thanks for the feedback. My Google-fu did lead me to that post last night, which made me feel sad, lol.

Yes, that is my setup: NVMe + 500GB rust that I wanted to partition. I guess I'll just leave it as-is and add a little more space via some SSDs that I have slots for.

