Coming back on this as a follow-up...
We were able to source Micron 7300 Pro 1.92TB NVMe drives.
Here are the results; the same fio command was used:
Code:
Jobs: 1 (f=1): [W(1)][100.0%][w=2230KiB/s][w=557 IOPS][eta 00m:00s]  Samsung 970 EVO Plus 2TB
Jobs: 1 (f=1): [W(1)][100.0%][w=274MiB/s][w=70.0k IOPS][eta 00m:00s]  Micron 7300 Pro
Hi mate, good that you solved the issue; we are in the same boat as you were before.
Is this 70k IOPS improvement just from changing the drives to Micron NVMe in the 2 DC architecture?
Or have you made any other configuration changes (network latency, etc.)?
And what VM OS are you doing the fio testing on, Linux or Windows?
Regards.
First of all, thanks for the reply to a 4-year-old thread. The results are from the following fio command, executed directly on the NVMe's; Ceph is not involved in this test:
Code:
fio --ioengine=libaio --filename=/dev/nvme... --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio
Ceph is pretty much default; there is no tuning.
Fio is executed on the Proxmox host, so the OS is Debian.
If you are using consumer-grade NVMe's, stop and change them for proper enterprise drives. There is no sense in continuing with consumer drives; that is my key takeaway.
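If someone wants to run the same comparison through Ceph instead of against the raw device, fio's rbd ioengine can drive the same 4K queue-depth-1 write pattern against an RBD image. A minimal sketch only: the pool name matches the 'ceph' pool that appears later in the thread, while the image and client names are made up for illustration.
Code:
# Same 4K, QD1 write pattern as above, but via librbd instead of the raw NVMe.
# "fio-test" (image) and "admin" (client) are placeholder names - adjust to your setup.
fio --ioengine=rbd --clientname=admin --pool=ceph --rbdname=fio-test --direct=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio-rbd
Expect noticeably lower numbers than the raw-device test, since every write now pays for replication and the network round-trips between the datacenters.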
With regards to the crush map, I have found the following in our docs:
# Show the current CRUSH tree before the changes
ceph osd tree
# Create a new root and one datacenter bucket per site
ceph osd crush add-bucket org-name root
ceph osd crush add-bucket DC1 datacenter
ceph osd crush add-bucket DC2 datacenter
# Attach both datacenters to the new root
ceph osd crush move DC1 root=org-name
ceph osd crush move DC2 root=org-name
# Move the hosts (and their OSDs) under their respective datacenter
ceph osd crush move pve11 datacenter=DC1
ceph osd crush move pve12 datacenter=DC1
ceph osd crush move pve13 datacenter=DC1
ceph osd crush move pve21 datacenter=DC2
ceph osd crush move pve22 datacenter=DC2
ceph osd crush move pve23 datacenter=DC2
# Verify the new hierarchy, then remove the now-empty default root
ceph osd tree
ceph osd crush remove default
# rules
rule multi_dc_rule {
    id 0
    type replicated
    # pick 2 datacenters under the org-name root, then
    # 2 hosts (one OSD each) within every chosen datacenter
    step take org-name
    step choose firstn 2 type datacenter
    step chooseleaf firstn 2 type host
    step emit
}
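For completeness, since the thread does not show it: a common way to get such a rule into a running cluster is to decompile the CRUSH map, add the rule, recompile and inject it, then point the pool at the new rule. A sketch, assuming the pool is the 'ceph' pool from the osd map output below; because the rule emits 2 datacenters x 2 hosts, the pool needs size 4, and min_size 2 is a common choice so a whole datacenter can fail:
Code:
# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# ...add the multi_dc_rule shown above to crushmap.txt, then recompile and inject it...
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin
# Point the pool at the rule and size it for 2 replicas per datacenter
ceph osd pool set ceph crush_rule multi_dc_rule
ceph osd pool set ceph size 4
ceph osd pool set ceph min_size 2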
ceph osd map ceph_pool image_id
root@pve11:~# ceph osd map ceph vm-100-disk-0
osdmap e49823 pool 'ceph' (2) object 'vm-100-disk-0' -> pg 2.720ca493 (2.93) -> up ([23,12,3,11], p23) acting ([23,12,3,11], p23)
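A quick way to confirm that the acting set really spans both datacenters is to ask Ceph for the CRUSH location of each OSD in the mapping above; a small sketch using the IDs from that output:
Code:
# Print host and CRUSH location for each OSD in the acting set [23,12,3,11]
for id in 23 12 3 11; do ceph osd find $id; done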
Missing a lot of detail here, and I may be wrong, but I would like to avoid confusion for future readers: that config is not a Ceph Stretched Cluster [1]. It is a "simple" CRUSH rule that uses a datacenter entity as the primary fault domain, then host.
Beware of this if both DCs can see the remote MON but the MONs at each "local" DC can't reach each other.
@VictorSTS, technically you are right; we are using the 3rd datacenter only for a MON, to maintain quorum in the event of a datacenter failure.
The Ceph Stretched Cluster is something we looked into a long time ago; it was only available with Pacific and above, which at the time was not yet available in Proxmox. We have not seen many people using it and have abandoned the idea. I am sure we have that somewhere on the forum.
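For future readers on Pacific or newer who do want the real thing rather than a plain multi-datacenter rule: stretch mode is enabled on the monitors, with the third-site MON acting as tiebreaker. A rough sketch with hypothetical monitor names and a pre-created stretch-aware CRUSH rule, not taken from the cluster in this thread:
Code:
# Tell Ceph where each monitor lives (monitor names here are hypothetical)
ceph mon set_location mon-dc1 datacenter=DC1
ceph mon set_location mon-dc2 datacenter=DC2
ceph mon set_location mon-tiebreaker datacenter=DC3
# Enable stretch mode: tiebreaker MON, stretch CRUSH rule, dividing bucket type
ceph mon enable_stretch_mode mon-tiebreaker stretch_rule datacenter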
Thanks for clarifying!
Regarding "so it's like comparing a Lambo vs a Fiat 500": does the bill compare in the same range too? Few people need a Lambo, and of those even fewer actually need one. Feels like Ceph and that Hammerspace thing target completely different use cases/budgets.