Dumb newb question I'm sure, but when creating an OSD with pveceph, will it automatically handle the CRUSH map weighting when using different sized OSDs or a different number of OSDs per host?
Example:
I've got 4 hosts that have 8 2.5" bays; two I use in RAID1 for the Proxmox boot volume, leaving the other 6 that I'm using (or would like to use) for OSDs.
I've got another 4 hosts that have 6 2.5" bays, so 4 available for OSDs after Proxmox boot volume RAID1.
Do I aim to match the same amount of storage per host? So on the first 4 hosts I'd have maybe 4x 480GB SSDs plus 2x 1TB SSDs (~3.9TB raw), and on the other 4 hosts 4x 1TB SSDs each (4TB raw)?
Can I just fill them all with 480GB drives and Proxmox/Ceph will figure it out and balance it accordingly?
Or do I just limit it to 4 SSDs per host and waste the extra 2 slots on the hosts that have 6 available bays?
I read that Ceph prefers similar/identical hardware in a pool, so my assumption is the latter, but I thought I'd ask in case I can somehow take advantage of the additional capacity.
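For reference, here's roughly how I'd expect to sanity-check it after creating the OSDs; my understanding (could be wrong!) is that CRUSH weights default to each OSD's capacity in TiB. The device path and OSD ID below are just placeholders:

```sh
# Create an OSD on a blank data disk; pveceph registers it in the
# CRUSH map with a weight derived from its capacity (in TiB).
pveceph osd create /dev/sdb

# Inspect the CRUSH tree: a 480GB SSD should show a weight around 0.44
# and a 1TB SSD around 0.91, so each host's total weight tracks its
# actual capacity rather than its drive count.
ceph osd tree

# If a weight ever needs a manual tweak (osd.3 is just an example ID):
ceph osd crush reweight osd.3 0.91
```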
Thanks!