What to do with Extra Disk?

Illydth · New Member · Aug 23, 2022
So, first-time poster and new to Proxmox. That means a couple of questions followed by the actual problem. I've TL;DR'd the post at the top for ease of reading.

The TL;DR of the post:
* Question: Why a 100G root disk for Debian by default? Seems excessive.
* Question: Why is a mostly unconfigured Proxmox installation with Ceph showing 14 partitions via fdisk? (Really this is: "What are the /dev/rbd* devices, and why are there so many of them at such small sizes?")
* Problem: Since Proxmox installs on a hard disk (I will refrain from comments on this, but bleh), I now have a ton of extra disk space that Ceph won't recognize or use, provisioned in an "LVM Thin Pool" format that isn't shared across the cluster. What do I do with it?


My Setup:
Let's start with my homelab setup: 3x Dell R410 with 4 spinning-rust drives in each system (assume for the moment it's a 500G + 3x1T... one system has 4x1T, but whatever) and 2 NICs (both 1G): IF1 is connected to the main network (public), IF2 is connected to its own 1G switch (the only thing on there is the backline Ceph [private network] traffic).

Ceph is installed and configured on top of 9T of HDD (3x1T in each system), supplying storage with 2 pools: one for object storage and one for CephFS. (I don't know if this is the right thing to do or not; there doesn't seem to be any documentation out there with recommendations on what benefits a "pool" gets you. I have no idea if I should be separating this 9T into one or more pools and/or what a "pool" buys me, especially since these are all 7200 RPM spinning disks from the same manufacturer.)
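For what it's worth, these are the commands I've been poking at the pools with (just the commands; I'm not claiming this is the right way to size them):

ceph df                          # per-pool usage and what space is left
ceph osd pool ls detail          # replication size, pg_num, and application per pool
ceph osd pool autoscale-status   # the PG autoscaler's view of each pool (if enabled)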


Request for Info:

First and foremost, I'd love an explanation as to why it's installing a 100G root partition for Debian Linux. (This isn't a complaint, just a curiosity factor - even in a Fortune 200 with 5+ PB of data we don't provision 100G OS disks)... especially since the base install appears to be ~4G of data.


Second (also a curiosity factor), what the hell is going on with my disks? fdisk -l shows the below:

Disk /dev/sda: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk /dev/sdd: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Disk /dev/mapper/ceph--382c...f6622: 931.51 GiB, 1000203091968 bytes,
Disk /dev/mapper/ceph--ffb7...6df52: 931.51 GiB, 1000203091968 bytes,
Disk /dev/mapper/ceph--e1ca...e48fb: 931.51 GiB, 1000203091968 bytes,
Disk /dev/rbd0: 75 GiB, 80530636800 bytes, 157286400 sectors
Disk /dev/rbd1: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk /dev/rbd2: 75 GiB, 80530636800 bytes, 157286400 sectors
Disk /dev/rbd3: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/rbd4: 75 GiB, 80530636800 bytes, 157286400 sectors

- /dev/sda is the base hard disk Proxmox was installed to, so I get that.
- I get the /dev/mapper/pve* disks... those are configured on top of /dev/sda above.
- /dev/sdb-d are the other drives that Ceph's using; those are fine.
- And Ceph is configured across the 3 nodes, so I get the /dev/mapper/ceph* disks.
? What the hell are the /dev/rbd* devices, and why do I have 5 of them that are so small?


Third and Final: My problem:

So /dev/sda's 3rd partition is a 465G LVM physical volume, onto which:
Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
have been mapped, so that's 104G of 465G, leaving me ~352G of disk unused in /dev/sda3...
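For reference, this is how I've been looking at the layout (names assume the default "pve" volume group the installer creates):

pvs          # /dev/sda3 shows up as the physical volume backing VG "pve"
vgs pve      # total vs. free space in the volume group
lvs -a pve   # root, swap, and the "data" thin pool with their sizes and usage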

The Proxmox installer has "helpfully" provisioned that into an LVM thin pool for me. (Why an LVM thin pool? I may be behind the times, but I know of very few uses outside of old Docker installations for an LVM thin pool.) It's not as if Ceph recognizes this format or wants to deal with it... so why provision this space AT ALL? And if it's going to be provisioned, why in THIS format, and why not mounted somewhere? It seems to me (and again, I might be behind the times) to be the least useful way of provisioning that extra space.

Because I have 3 systems in the cluster, and because Proxmox has to be installed on their individual HDDs, that means I now have somewhere in the neighborhood of 1.5-2T of hard disk provisioned in this "LVM Thin Pool" format across 3 systems.

It's not shared, so using it for ISO or image files is pretty useless.
It's not recognizable by Ceph, so it's not as if it could be provisioned into its own Ceph object or CephFS space for something.

So what should I do with it? We're talking 330G/330G/800G...what's the best thing I could do with this space?
 
* Question: Why a 100G root disk for Debian by default? Seems excessive.
The default installation makes the best possible guess to try to keep everyone happy, from a home user running on a NUC to someone who has access to 3 nodes. As with almost any software, you have the option to customize the installation to suit your needs.
The root partition is used for the default "local" storage - directory/file type - that can contain qcow2 images, ISOs, templates, and snippets. If you feel it's excessive - customize your installation.
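For context, the default storage definitions usually look roughly like this in /etc/pve/storage.cfg (exact content types can vary by install):

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images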

"what are the /dev/mapper/rbd* partitions and why are there so many of them at so small a size?)
have you created VM with disks? Most likely those are them. The Ceph RBD is block level layer protocol. In order to "pass-through" the disk to VM it first needs to be attached to the hypervisor.
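A rough way to match the /dev/rbdN devices back to VM disks (substitute your own pool name):

rbd showmapped            # id, pool, image, snap, and device for each mapped image
rbd ls -l <your-pool>     # images (vm-<vmid>-disk-<n>) with their sizes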

Problem: Since Proxmox installs on a hard disk (I will refrain from comments on this, but bleh), I now have a ton of extra disk space that Ceph won't recognize or use, provisioned in an "LVM Thin Pool" format that isn't shared across the cluster. What do I do with it?
Really, whatever you want. It's in an LVM thin pool that, again by default, is used by the "lvm-thin" storage - space will get sliced from there and attached to VMs should you decide to use that storage. You have all the tools available to you to change it (google LVM management).
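If you want to see what that "slicing" looks like at the LVM level, it is roughly this (names here are only illustrative; PVE normally does it for you through the storage layer):

lvcreate -V 32G --thin -n vm-100-disk-0 pve/data   # thin volume carved out of the pool
lvs pve                                            # the new volume shows up next to root/swap/data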

1G switch (the only thing on there is the backline Ceph [private network] traffic).
Ceph is installed and configured on top of 9T of HDD (3x1T in each system)
Please keep your performance expectations extremely low. Both of these go against any and all best practices in 2022.
First and foremost, I'd love an explanation as to why it's installing a 100G root partition for Debian Linux. (This isn't a complaint, just a curiosity factor - even in a Fortune 200 with 5+ PB of data we don't provision 100G OS disks)... especially since the base install appears to be ~4G of data.
Does this "Fortune 200" have a base of users that attempt to install hypervisors on hardware assembled from eBay surplus, ten-plus-year-old servers, NUCs, mini PCs, the latest and greatest CPU just hitting the market, and gaming motherboards with liquid cooling? I suspect you have rules and procedures in place that regulate hardware, OS, and application deployment. PVE does its best with the default installation, using a percentage of the root disk for the default partitions/pools. You can modify these assumptions both during and after installation.

Here is an example of a cloud provider applying its own size customization when deploying Proxmox, and users complaining it's too small:
https://forum.proxmox.com/threads/root-user-only-20gb.107456/#post-492679


Second (also a curiosity factor), what the hell is going on with my disks? fdisk -l shows the below:
You have installed a Hyper-Converged Hypervisor, simultaneously running local and distributed storage. There is some complexity involved.
Have you reviewed any of the following:
https://pve.proxmox.com/wiki/Installation
https://pve.proxmox.com/wiki/Storage
https://pve.proxmox.com/wiki/Storage:_Directory

The Proxmox installer has "helpfully" provisioned that into an LVM thin pool for me. (Why an LVM thin pool? I may be behind the times, but I know of very few uses outside of old Docker installations for an LVM thin pool.)
https://pve.proxmox.com/wiki/Storage - lvm-thin uses are described here. In short, it's for block storage that can be used by VMs and containers.
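A small sketch of using it through the PVE storage layer instead of raw LVM (storage and VM names are just examples; the GUI normally does this for you when you add a disk):

pvesm alloc local-lvm 100 vm-100-disk-1 8G   # carve an 8G thin volume for VM 100
pvesm list local-lvm                         # list the volumes on that storage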

It seems to me (and again, I might be behind the times) to be the least useful way of provisioning that extra space.
https://pve.proxmox.com/pve-docs/chapter-pvesm.html

It's not shared, so using it for ISO or image files is pretty useless.
You can delete it and expand your root partition to use it for ISOs. In Proxmox, "image" usually refers to a qcow2 or raw disk file.
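A rough sketch of that, assuming the default ext4-on-LVM layout and that nothing is stored on the thin pool yet (this destroys the pool and anything on it):

pvesm remove local-lvm           # drop the storage definition in PVE
lvremove pve/data                # delete the thin pool itself
lvextend -l +100%FREE pve/root   # grow the root LV into the freed space
resize2fs /dev/pve/root          # grow the ext4 filesystem to match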

It's not recognizable by Ceph, so it's not as if it could be provisioned into its own Ceph object or CephFS space for something.
You can make Ceph use partitions. However, that's advanced-level administration and is not covered by the Proxmox tooling (GUI or CLI). You will need to do some research on that. Having said that, adding a slice of the spinning rust that the OS is using to a Ceph pool is a really bad idea.
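For completeness only (the point stands that doing this on the OS disk is a bad idea), turning a spare partition into an OSD outside the GUI is along these lines - /dev/sda4 here is purely hypothetical:

ceph-volume lvm create --data /dev/sda4   # creates and activates an OSD on that partition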

So what should I do with it? We're talking 330G/330G/800G...what's the best thing I could do with this space?
Once the use case for this installation is defined, I am sure the answer will become clear.



 
The PVE installer also allows you to define how big you want your partitions. See the chapter "Advanced LVM Configuration Options" here: https://pve.proxmox.com/wiki/Installation
So it would be no problem to tell PVE to, for example, only create a 20GB root disk + 12GB swap and leave the remaining space unpartitioned.
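From memory, the advanced options in that chapter are roughly these (the wiki has the exact semantics):

hdsize     # total space the installer will use on the disk; lowering it leaves the rest unpartitioned
swapsize   # size of the swap LV (e.g. 12 GiB)
maxroot    # maximum size of the root LV (e.g. 20 GiB)
maxvz      # maximum size of the data (LVM-thin) LV; I believe 0 skips creating it
minfree    # space to leave free in the "pve" volume group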
 
Yeah, I was having a hard enough time getting the GUI installer to come up on my rack console (1024x768). Once I got there I sort of breezed through the install and didn't realize there were advanced options for disk sizing.

Thanks to both / all of you for the responses.

Re: /dev/rbd*: yep, that makes sense. I see them being added to as I add VMs. Thanks!

Re: thin pools: OK, so there is no "out of the box" use for the thin pools as they're created. I was afraid that something in the base (like the LXC containers function) was using them. Docker used to set up its container FS storage on a thin pool, so I was concerned about getting rid of them.

Re: performance: Yeah, I'm aware the recommendation is SSDs and 10G+ networks. With much of the conversation you get here and in the homelab areas, you'd think it was utterly impossible to run Proxmox+Ceph without 10G+ networking and SSDs in half your bays. I'm sure at some point what I've got will cause some kind of performance issue, but with under a dozen VMs running mostly family web traffic and a couple of low-end services for friends and family, neither Ceph nor my hardware is currently breaking any kind of sweat, even with spinning disks and a 1G backbone.
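If I ever want to put a number on it, something like this should give a rough baseline (pool name is whatever the pool is actually called; it writes benchmark objects into that pool):

rados bench -p <pool> 10 write --no-cleanup   # 10-second write benchmark
rados bench -p <pool> 10 seq                  # sequential reads of what was just written
rados -p <pool> cleanup                       # remove the benchmark objects afterwards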

Does this "fortune 200" have a base of users that attempt to install Hypervisors on hardware assembled from ebay surplus, ten+ year old servers, Nucs, mini-pc's, latest and greatest CPU just hitting the market, gaming motherboards with liquid cooling? I suspect you have rules and procedures in place that regulate the hardware, OS and application deployment. PVE does its best with default installation, using % of the root disk for default partitions/pools. You can modify these assumptions both during and post installation.
Sure, there's a point to be made here absolutely. Obviously the kinds of things you're going to do with a server in a professional setting isn't the kind of things you're going to be doing in a random home setting with random hardware...Linux grew up on being installed on whatever the hell was about to be thrown in the trash the next day.

Without trying to be argumentative, though: I'm not asking why Proxmox/Debian doesn't install package X, support technology Y, or why it doesn't put root on filesystem Z. I'm asking why it installs on 100G of disk... because at the end of the day its full install size is 4.5 Gig. It doesn't matter if that 4.5 Gig is installed on eBay surplus, a mini PC, a $20,000 Dell server, a water-cooled gaming PC, or a Raspberry Pi... 4.5G is 4.5G, and dropping that on a 100G partition seems... excessive.

"Percentage of disk it's installed to" however is the answer that sates my curiosity. Foul on me for not paying closer attention to the disk configuration options during install...and your point is well taken about the broad use of the software.

Mostly I was just looking for some general advice on what most people do with the disk Proxmox leaves behind after installation on their servers.

Anyway, thanks again for taking the time to respond to this, I really do appreciate it!
 
I'm asking why it installs on 100G of disk... because at the end of the day its full install size is 4.5 Gig. It doesn't matter if that 4.5 Gig is installed on eBay surplus, a mini PC, a $20,000 Dell server, a water-cooled gaming PC, or a Raspberry Pi... 4.5G is 4.5G, and dropping that on a 100G partition seems... excessive.
Often the PVE server has only got a single disk that needs to be used for everything. Keep in mind that LVM-Thin is block storage where no files/folders can be stored (at least not without using the CLI to create some custom LVs, partitioning, formatting, mounting). So the only place to store stuff like backups, ISOs, templates, ... is your root partition. I guess that when using something like 16GB for the root partition, people would complain about why it is so small and that they can't upload an ISO to install a VM because the root filesystem is running out of space. And as soon as the root filesystem runs out of space, PVE will stop working. People are always complaining here that PVE stops working and don't get that it's because they filled up the root filesystem, because no one actually monitors the filesystems. I guess that would be way worse if the size of the root filesystem were lowered.
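A minimal sketch of the kind of check that catches it early (plain cron, just as an example):

df -h / /var/lib/vz   # keep an eye on root filesystem usage
lvs pve               # Data% shows how full the thin pool is

# /etc/cron.d entry (illustrative) that prints a warning when root goes above 90%:
0 * * * * root df -P / | awk 'NR==2 && $5+0 > 90 {print "root filesystem at " $5}'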
 