So, first-time poster and new to Proxmox. That means a couple of questions followed by the actual problem. I've TL;DR'd the post at the top for ease of reading.
The TL;DR of the post:
* Question: Why a 100G root volume for Debian by default? Seems excessive.
* Question: Why is a mostly unconfigured Proxmox installation with Ceph showing 14 disks via fdisk? (Really this is "what are the /dev/rbd* devices, and why are there so many of them at so small a size?")
* Problem: Since Proxmox installs on a local hard disk (I will refrain from comments on this, but bleh), I now have a ton of leftover disk, provisioned as an "LVM Thin Pool", that Ceph won't recognize or use and that isn't shared across the cluster. What do I do with it?
My Setup:
Let's start with my homelab setup: 3x Dell R410 with 4 spinning-rust drives in each system (assume for the moment it's a 500G + 3x 1T... one system has 4x 1T, but whatever). 2 NICs (both 1G): IF1 is connected to the main network (public), IF2 is connected to its own 1G switch (the only thing on there is the backline Ceph [private network] traffic).
Ceph is installed and configured on top of 9T of HDD (3x 1T in each system) and supplies storage via 2 pools: one for object storage and one for CephFS. (I don't know if this is the right thing to do or not; there doesn't seem to be any documentation out there with recommendations on what benefits a "pool" gets you. I have no idea if I should be splitting this 9T into 1 or more pools and/or what a "pool" buys me, especially since these are all 7200 RPM spinning disks from the same manufacturer.)
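(In case it's relevant to answering, here's how I've been poking at the pools so far; these are just the standard Ceph commands, nothing Proxmox-specific:)

# Overall cluster usage and per-pool capacity
ceph df
# Replication size, PG count, and application (rbd vs cephfs) for each pool
ceph osd pool ls detail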
Request for Info:
First and foremost, I'd love an explanation as to why it installs a 100G root volume for Debian Linux. (This isn't a complaint, just a curiosity factor: even at a Fortune 200 with 5+ PB of data we don't provision 100G OS disks)... especially since the base install appears to be ~4G of data.
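(For reference, this is roughly how I compared actual usage against what was allocated on one node; this assumes the default "pve" volume group name the installer uses:)

# Actual space used on the root filesystem
df -h /
# How the installer split the volume group into swap/root/data
lvs pve
vgs pve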
Second (also a curiosity factor), what the hell is going on with my disks? fdisk -l shows the below:
Disk /dev/sda: 465.76 GiB, 500107862016 bytes, 976773168 sectors
Disk /dev/sdb: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk /dev/sdc: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk /dev/sdd: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Disk /dev/mapper/ceph--382c...f6622: 931.51 GiB, 1000203091968 bytes,
Disk /dev/mapper/ceph--ffb7...6df52: 931.51 GiB, 1000203091968 bytes,
Disk /dev/mapper/ceph--e1ca...e48fb: 931.51 GiB, 1000203091968 bytes,
Disk /dev/rbd0: 75 GiB, 80530636800 bytes, 157286400 sectors
Disk /dev/rbd1: 50 GiB, 53687091200 bytes, 104857600 sectors
Disk /dev/rbd2: 75 GiB, 80530636800 bytes, 157286400 sectors
Disk /dev/rbd3: 200 GiB, 214748364800 bytes, 419430400 sectors
Disk /dev/rbd4: 75 GiB, 80530636800 bytes, 157286400 sectors
- /dev/sda is the base hard disk Proxmox was installed on, so I get that.
- I get the /dev/mapper/pve* volumes... those are configured on top of /dev/sda above.
- /dev/sdb-sdd are the other drives Ceph is using; those are fine.
- And Ceph is configured across the 3 nodes, so I get the /dev/mapper/ceph* volumes.
? What the hell are the /dev/rbd* devices, and why do I have 5 of them at such small sizes?
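(If it helps anyone answer, this is what I ran to try to trace those devices back to something; rbd showmapped should list which pool/image each /dev/rbdX corresponds to. <poolname> is just a placeholder for whatever your RBD pool is called:)

# Map /dev/rbdX devices back to their pool/image
rbd showmapped
# Long listing of the images in a given pool, with sizes
rbd ls -l <poolname>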
Third and Final: My problem:
So /dev/sda's 3rd partition is a ~465G LVM physical volume, onto which:
Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
have been mapped, so that's 104G of 465G, leaving me roughly 360G of disk unused in /dev/sda3...
The Proxmox installer has "helpfully" provisioned that space into an LVM thin pool for me. (Why an LVM thin pool? I may be behind the times, but I know of very few uses for an LVM thin pool outside of old Docker installations.) It's not as if Ceph recognizes this format or wants to deal with it... so why provision this space at all? And if it's going to be provisioned, why in this format, and why not mounted somewhere? It seems to me (and again, I might be behind the times) to be the least useful way of provisioning that extra space.
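(For the curious, this is what that layout looks like from the node itself; this assumes the installer's default "pve" volume group and the default "local-lvm" storage ID:)

# Shows the 'data' thin pool plus its hidden metadata volumes
lvs -a pve
# How Proxmox sees its storages; local-lvm is the thin pool
pvesm status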
Because I have 3 systems in the cluster, and because Proxmox has to be installed on each one's local HDD, I now have somewhere in the neighborhood of 1.5-2T of disk provisioned in this LVM-thin-pool format across the 3 systems.
It's not shared, so using it for ISOs or image files is pretty useless.
It's not recognizable by Ceph, so it's not as if it could be provisioned into its own Ceph object or CephFS space for something.
So what should I do with it? We're talking 330G/330G/800G...what's the best thing I could do with this space?
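(One option I've seen suggested, if I just want the space back as plain root filesystem, is roughly the following. This assumes the default pve/data thin pool and local-lvm storage ID, an ext4 root, and nothing already stored on the thin pool. But that still leaves it as unshared local disk, so I'd love better ideas:)

# Drop the (empty) thin-pool storage definition, then the thin pool itself
pvesm remove local-lvm
lvremove pve/data
# Grow root into the freed space and resize the filesystem
lvresize -l +100%FREE pve/root
resize2fs /dev/pve/root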