Mixed SSD / HDD Ceph pool

blastmun

New Member
Dec 27, 2023
Hi,

I'm coming to you about the feasibility of a setup.
I currently have two PVE nodes at home, and a third is coming very soon.
My node1 mainly runs various personal cloud VMs, Docker, home automation, etc.
Config:
MSI B550 Tomahawk
Ryzen 5 5600G
32GB RAM

My node2 is currently used exclusively to run an OPNsense VM.
Intel NUC N5105, 16GB RAM

My upcoming node3 will handle replication and HA for my cloud storage part; it will be located at a family member's home, who also has a fiber subscription.
Core i5-6500
16GB RAM

My storage currently lives on a machine similar to node1, running TrueNAS Scale. It holds a RAIDZ1 pool of 3 × 4TB HDDs plus a standalone 6TB HDD.
For various reasons, I want to consolidate everything onto Proxmox, with the goal of doing HA with Ceph storage.
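The basic steps I have in mind, assuming the standard Proxmox tooling (the cluster name and disk paths are just placeholders):

    # On node1: create the cluster; node2 and node3 then join it
    pvecm create homecluster
    pvecm add <node1-ip>          # run on node2 and node3

    # On each node: install Ceph, then create a monitor and one OSD
    pveceph install
    pveceph mon create
    pveceph osd create /dev/sdb   # disk path is an example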

My question is the following: if I create a pool consisting of these 3 OSDs:
node1: a 4TB SSD OSD
node2: a 4TB HDD OSD
node3: a 4TB HDD OSD

and put those 3 in the same pool, knowing that my VMs will always be running on node1, is it possible to benefit from the SSD's performance, or will mixing SSD and HDD in the pool kill the performance?
 
Thank you in advance for your answers.
What I would have liked is for my pve1 to store its data on my 4TB SSD for responsiveness, while still being able to back it up to the HDDs on my other nodes.
 
I understand your plan. It is not possible with Ceph. Ceph does synchronous writes to all copies before the client gets the acknowledgement, so mixing SSDs with HDDs drags the SSDs down to HDD speed. Besides that, your intended cluster is too small to do anything useful.
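For what it's worth, the usual way to keep the two device types apart is CRUSH device classes: one rule per class, and each pool bound to one rule. A minimal sketch (pool names and PG counts are just examples):

    # OSDs get an ssd/hdd device class automatically; check with:
    ceph osd tree

    # One CRUSH rule per device class
    ceph osd crush rule create-replicated replicated_ssd default host ssd
    ceph osd crush rule create-replicated replicated_hdd default host hdd

    # Bind pools to those rules
    ceph osd pool create fastpool 64 64 replicated replicated_ssd
    ceph osd pool create slowpool 64 64 replicated replicated_hdd

With a single SSD in the whole cluster, an ssd-class pool could not place three replicas anyway.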
 
Perhaps my reasoning won't please you, but keep in mind that my use is personal, for a few dozen people in my family. To sketch it out:

PVE1
(VM 100 Nextcloud)
OS: local NVMe
DATA: 4TB SSD
(VM 101 Proxmox Backup Server)
OS: local NVMe
DATA: 4TB on Ceph
I create my Ceph pool with a RAIDZ on PVE1 and plain HDDs on PVE2 and PVE3.
On PVE1 I create VM 100 (Nextcloud) with its OS on a local 500GB NVMe and add a data disk on my 4TB SSD. I create a second VM, 101, running PBS; its OS is also on the local NVMe, with a second "storage" disk on Ceph.
I create a cluster with my 3 PVEs and put the PBS VM under HA, so that if PVE1 goes down for whatever reason, another node takes over.
In normal operation, my Nextcloud VM then gets the full performance of my SSD.
I create a backup task running at least every 30 minutes; since it is incremental, only new files get backed up.
That lets me keep the benefits of the SSD while saving its data every 30-60 minutes, without constantly hitting the 3 (or 5, if one node runs RAIDZ1) HDDs of the Ceph pool. Given that my use is purely personal (photos, documents, etc.), the probability of losing "crucial" data is practically zero, and in general the most sensitive data stays on my local PC in parallel anyway.
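On the PVE side, that would look something like this (the VM IDs match the plan above; "pbs-ceph" is just a placeholder name for the PBS storage entry):

    # Put the PBS VM under HA so another node restarts it if PVE1 fails
    ha-manager add vm:101 --state started

    # Incremental snapshot backup of the Nextcloud VM every 30 minutes
    pvesh create /cluster/backup --schedule "*/30" --storage pbs-ceph --vmid 100 --mode snapshot

One caveat: HA can only restart a VM on another node if all of its disks sit on shared storage (Ceph here), so the PBS VM's OS disk on PVE1's local NVMe would pin it to PVE1.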

What do you think?
 
No, I'm talking about Ceph; I haven't deployed one yet, so I don't know all the possible variants. In principle it would have interested me if one of the OSDs could sit on a "RAID". I am running a lab under VirtualBox to explore the possible settings, and it does not actually seem possible to reuse an existing LVM/DIR or ZFS volume as an OSD; you can only add a whole disk.
On the other hand, when creating a Ceph OSD, I have trouble understanding the DB/WAL disk. It is said to speed up performance, but nothing says how big those devices should be relative to the storage disk.
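From what I can see in my lab, this is where a separate DB device would be declared at OSD creation (device paths and size are just examples; if I understand the docs right, Proxmox defaults the DB to about 10% of the OSD size, while common Ceph guidance is a few percent of the data disk):

    # OSD on a 4TB HDD, with its BlueStore DB (the WAL follows the DB
    # by default) on a faster NVMe device; --db_size is in GiB
    pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 160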

Otherwise, let's stick with the 4TB SSD on PVE1 and an incremental backup managed by PBS onto the Ceph pool made of pve1 (4TB HDD), pve2 (4TB HDD) and pve3 (4TB HDD).
What do you think?