How many OSDs?

12 is just a baseline recommendation.

From the looks of it you will have 6 OSDs? That would be a small cluster, but it will work and function without issues. It just means that if one node goes down you will only have 2 replicas while you repair it or bring a new node online.
 
Sorry, where did I say that?

All I stated was that you said you have 3 servers, each with 2 disks. Therefore you have the capacity for 6 OSDs.

That will work fine. The only issue is that if you lose a whole host, Ceph won't be able to automatically repair and will run with a replication of 2 until you can bring the host back online. If you have at least 4 hosts, then you can lose a host and still have the 3-host minimum required for 3-way replication of all PGs.
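If you want to double-check what replication a pool is actually set to, you can query it directly (the pool name "rbd" here is just an example):

ceph osd pool get rbd size       # replica count, e.g. 3
ceph osd pool get rbd min_size   # minimum replicas needed before I/O is blocked, e.g. 2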
 
We plan to get a fourth host. We have one empty drive on each host, and a total of 4 drive bays on each server. Is it worth it to add additional disks in the empty drives? If so, how many OSDs would that be?
 
You can create one OSD per disk; the number of OSDs you require really depends on your use case.
How big are the disks? How much usable storage do you require on Ceph?

6 OSDs will work, and 4 nodes would be better, but it really comes down to what you require from the cluster in performance and capacity.
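For reference, creating an OSD on a blank disk is a one-liner on each node (the disk path below is just an example, and depending on your PVE version the command is pveceph createosd or pveceph osd create):

pveceph osd create /dev/sdb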
 

Hello. We've moved on to creating a pool, but I'm confused about what the options below mean. What are Size, Min. Size, and Crush Rule?

error1.JPG

We want to set a PG_NUM of 512, as we plan to have a total of 12 OSDs once we upgrade our cluster, but we get this error. What would be the recommended configuration?

error.JPG
 

Attachments

  • default pool.JPG
Start with 256; 512 is too many. Even for 12 OSDs, 256 should be fine, and you can easily increase it to 512 in the future.
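When you do grow to 12 OSDs, raising the PG count later is just the following (the pool name is an example; on older Ceph releases you also need to raise pgp_num to match):

ceph osd pool set ceph-vm pg_num 512
ceph osd pool set ceph-vm pgp_num 512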
 

Thank you. The template option is greyed out on the pool. I'm not able to upload an ISO template to the pool.
 

Correct, an RBD pool can only store RBD images for VM disk storage.

You need to upload the ISO to the standard location /var/lib/vz/template/iso
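For example, from your workstation you could copy it straight into that directory on one of the nodes (the ISO filename and hostname below are just placeholders):

scp debian-12.iso root@pve-node1:/var/lib/vz/template/iso/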
 

When I try to migrate my VM, I get an error saying that I cannot migrate.

migrate error.JPG
 
I figured out the error: there was a local disk mounted on the VM. I needed to unmount it, and then I could successfully migrate my VMs.
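In case anyone else hits this, the VM config shows any disks or ISOs still sitting on local storage, and the migration can be done from the CLI as well (the VMID and node name are just examples):

qm config 101                      # look for disks/ISOs on local storage
qm migrate 101 pve-node2 --online  # live-migrate once only shared storage is attached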
 
What I don't understand is how to create multiple pools. There doesn't seem to be an option to select disk size. With multiple applications running in each cluster, how do I create multiple pools? I don't see an option to select pool size.
 

You don't set the disk size on the pool; you set it when you create the RBD disk attached to the VM.
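For example, adding a new disk of a given size to a VM from the CLI looks like this (the VMID, storage name and size are just examples; STORAGE:SIZE allocates a new volume of that many GB on the pool):

qm set 101 --scsi1 ceph-vm:32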
 

So then how are the sizes of multiple pools handled? Does each pool have the same amount of space available?
 

Yes, a pool's free space is calculated from the OSDs that are available to it and the replication.

For example, a pool of 3:2 and a pool of 2:1 with the same backing OSDs will show different available disk space.

The limit is set on the RBD; however, you need to monitor pool and OSD usage via ceph osd df, as once an OSD gets near full, write I/O will be stopped.

You'll see warnings start to appear in ceph -s once an OSD / pool is getting close.

That said, you can set a quota on a pool if for some reason you do want to limit a particular pool:

ceph osd pool set-quota {pool-name} [max_objects {obj-count}] [max_bytes {bytes}]

However, by default a pool can use any free space available to it.
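As a rough example of the monitoring and quota commands mentioned above (the pool name and the 1 TiB figure are just placeholders):

ceph df                                                    # per-pool usage
ceph osd df                                                # per-OSD usage, watch %USE
ceph osd pool set-quota ceph-vm max_bytes 1099511627776    # cap the pool at 1 TiB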
 
Okay. What would be the correct hard disk bus/device settings? I have my storage selected as the pool.

bus device.JPG
 
