How to set up storage with SSD and HDD

osti

New Member
May 3, 2022
Hello all,

I installed a Proxmox cluster with 3 Cisco server nodes.
Each node has 6 SSDs (2 x 240GB, 4 x 1TB) and 12 HDDs with 2TB each.

The first 2 SSDs are used as ZFS raid0 for the system itself.

Now I have 4 SSDs (4 x 1TB) and 12 HDDs (12 x 2TB) left.
My idea is to use 2 of the HDDs in another ZFS raid0 to store ISO images.
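Something like this is what I have in mind for the ISO pool (just a sketch; the device names are placeholders, not my actual disks):

# stripe two of the 2TB HDDs into one pool (placeholder device names)
zpool create isostore /dev/sdX /dev/sdY
# a ZFS pool can't hold ISOs directly in PVE, so add its mountpoint as a directory storage
pvesm add dir isostore -path /isostore -content iso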
With the other 4 SSDs (4TB) + 10 HDDs (20TB) I would like to set up a Ceph cache tier.
Is it possible to use Ceph cache tiering with Proxmox VE, or is there a "better/easier" way to use the current disk setup?

Thank you so much.
 
The first 2 SSDs are used as ZFS raid0 for the system itself.
Why? The PVE system disk needs neither that much performance nor that much space. The only thing that would make sense would be RAID1 for redundancy.
 
I know it is a little bit overdone, but we don't want to change it for the moment (in case we changed it, we would just have one more SSD with 240GB).
Do you have any suggestions regarding the other SSDs/HDDs?

Thanks.
 
Do you have any suggestions regarding the other SSDs/HDDs?
Use Ceph. This should be a cluster worthy of its name, therefore you need some kind of shared storage, and the distributed shared storage Ceph offers seems the best fit.
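Getting Ceph up on PVE is only a few steps, roughly (the cluster network CIDR is a placeholder):

# on each node (or via the GUI)
pveceph install
# once, to initialize Ceph with its cluster network (placeholder CIDR)
pveceph init --network 10.10.10.0/24
# then one monitor per node
pveceph mon create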

I know it is a little bit overdone, but we don't want to change it for the moment (in case we changed it, we would just have one more SSD with 240GB).
You won't have another disk with 240GB; you would use a RAID1 with 240GB in which either of the two SSDs can fail without any service interruption.
 
Use Ceph. This should be a cluster worthy of its name
(Oh sorry, I have to correct myself: we do have a ZFS raid1 (mirror) for the first 2 SSDs.)

Hmm, I know that I can create different device classes in Ceph (e.g. SSD and HDD), but compared to using the faster OSDs as journal devices or having a cache tier (with hot and cold storage), will it be the "same"?
 
I would like to ask again: if I understand right, I have the option to create 2 CRUSH rules, one for SSDs and one for HDDs.
After that, I can create 2 pools: one with my SSDs and another one with the HDDs.
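I imagine something like this (only a sketch from reading the Ceph docs; pool names and PG counts are made up):

# one replicated CRUSH rule per device class
ceph osd crush rule create-replicated ssd_rule default host ssd
ceph osd crush rule create-replicated hdd_rule default host hdd
# one pool per rule (PG counts are placeholders)
ceph osd pool create ssd_pool 128 128 replicated ssd_rule
ceph osd pool create hdd_pool 512 512 replicated hdd_rule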

Is there another way to use SSDs and HDDs in one Ceph pool, where the SSDs do some kind of caching?
I found this documentation about bcache and this Adding caching tier to your filesystem on GitHub.
Is it possible to use it with Proxmox?
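From what I read there, the bcache part would look roughly like this (untested on my side; device names are placeholders):

# SSD as cache device, HDD as backing device; this creates /dev/bcache0
make-bcache -C /dev/sdX -B /dev/sdY
# the OSD would then be created on top of /dev/bcache0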

Thank you.
 
Is there another way to use SSDs and HDDs in one Ceph pool, where the SSDs do some kind of caching?

You may be looking for Cache Tiering. Further information: https://docs.ceph.com/en/quincy/rados/operations/cache-tiering/
But it is deprecated. Further information: https://access.redhat.com/documenta...0/html/release_notes/deprecated_functionality
I have the same problem as you.
Now I found DB/WAL in BlueStore. Further information: https://docs.ceph.com/en/quincy/rados/configuration/bluestore-config-ref/
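With that, each HDD OSD keeps its data on the HDD but puts its RocksDB/WAL on an SSD, e.g. (a sketch; device names and DB size are placeholders):

# create an OSD on a HDD with its DB/WAL on a faster SSD (placeholder devices, size in GiB)
pveceph osd create /dev/sdX -db_dev /dev/sdY -db_dev_size 60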
 
