A common post on the forums, it seems, but my case is unique! Probably not, really ...
3 Proxmox/Ceph nodes; one is just a NUC used for quorum purposes, with no OSDs or VMs.
Underlying filesystem is ZFS, so I'm using journal dio = 0 (see the ceph.conf sketch after this list).
2 OSDs on two nodes
- 3 TB Western Digital Reds
- SSD for cache and log
OSD nodes: 2 × 1 GbE in balance-rr, directly connected (bond sketch below). iperf gives 1.8 Gbit/s.
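For reference, here's a minimal sketch of the OSD stanza in ceph.conf mentioned above. Only journal dio = 0 is actually from my config; the comment explains why it's needed on ZFS:
Code:
[osd]
    # ZFS on Linux does not support O_DIRECT, so direct I/O
    # on the filestore journal has to be switched off
    journal dio = 0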
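And a sketch of the balance-rr bond between the OSD nodes, assuming Debian-style ifupdown config; the interface names and address are placeholders, only the mode and the two directly connected 1 GbE links are from my setup:
Code:
auto bond0
iface bond0 inet static
    address 10.10.10.1          # placeholder address
    netmask 255.255.255.0
    bond-slaves eth1 eth2       # the two directly connected 1 GbE NICs
    bond-mode balance-rr
    bond-miimon 100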
Original tests were with the ZFS log and cache on SSD.
Using dd in a guest, I got sequential writes of 12 MB/s.
I also tried with the Ceph journal on an SSD and journal dio enabled, which did improve things, with guest writes up to 32 MB/s.
Sequential reads are around 80 MB/s.
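The dd runs were along these lines; I'm not reproducing the exact flags here, so treat this as an assumed example rather than the precise commands:
Code:
# sequential write, ~1 GB, flushed to disk at the end
dd if=/dev/zero of=/root/testfile bs=1M count=1024 conv=fdatasync
# sequential read of the same file
dd if=/root/testfile of=/dev/null bs=1M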
The same test runs with GlusterFS give much better results, sometimes by an order of magnitude.
Ceph Benchmarks
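The test pool was created beforehand, something like the following (the PG count here is a placeholder, not necessarily what I used). Note that --no-cleanup on the write bench leaves the objects in place so the seq bench has something to read:
Code:
ceph osd pool create test 128 128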
Code:
rados -p test bench -b 4194304 60 write -t 32 -c /etc/pve/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --no-cleanup
Total time run: 63.303149
Total writes made: 709
Write size: 4194304
Bandwidth (MB/sec): 44.800
Stddev Bandwidth: 28.9649
Max bandwidth (MB/sec): 96
Min bandwidth (MB/sec): 0
Average Latency: 2.83586
Stddev Latency: 2.60019
Max latency: 11.2723
Min latency: 0.499958
rados -p test bench -b 4194304 60 seq -t 32 -c /etc/pve/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --no-cleanup
Total time run: 25.486230
Total reads made: 709
Read size: 4194304
Bandwidth (MB/sec): 111.276
Average Latency: 1.14577
Max latency: 3.61513
Min latency: 0.126247
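Sanity check on those numbers: 709 writes × 4 MiB in 63.3 s ≈ 44.8 MB/s, and 709 reads × 4 MiB in 25.5 s ≈ 111.3 MB/s, so the reported bandwidths are internally consistent. The more worrying part is the write run's min bandwidth of 0 and stddev of ~29: the cluster stalls periodically rather than writing slowly but steadily.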
ZFS
- SSD LOG
- SSD Cache
-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 186.231 MB/s
Sequential Write : 7.343 MB/s
Random Read 512KB : 157.589 MB/s
Random Write 512KB : 8.330 MB/s
Random Read 4KB (QD=1) : 3.934 MB/s [ 960.4 IOPS]
Random Write 4KB (QD=1) : 0.165 MB/s [ 40.4 IOPS]
Random Read 4KB (QD=32) : 23.660 MB/s [ 5776.3 IOPS]
Random Write 4KB (QD=32) : 0.328 MB/s [ 80.1 IOPS]
Test : 1000 MB [C: 38.6% (24.7/63.9 GB)] (x5)
Date : 2014/11/26 18:46:51
OS : Windows 7 Professional N SP1 [6.1 Build 7601] (x64)
ZFS
- SSD Cache (No LOG)
Ceph
- SSD Journal
-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 198.387 MB/s
Sequential Write : 23.643 MB/s
Random Read 512KB : 155.883 MB/s
Random Write 512KB : 18.940 MB/s
Random Read 4KB (QD=1) : 3.927 MB/s [ 958.7 IOPS]
Random Write 4KB (QD=1) : 0.485 MB/s [ 118.5 IOPS]
Random Read 4KB (QD=32) : 23.482 MB/s [ 5733.0 IOPS]
Random Write 4KB (QD=32) : 2.474 MB/s [ 604.0 IOPS]
Test : 1000 MB [C: 38.8% (24.8/63.9 GB)] (x5)
Date : 2014/11/26 22:16:06
OS : Windows 7 Professional N SP1 [6.1 Build 7601] (x64)
Gluster Benchmarks
Code:
ZFS
- SSD LOG
- SSD Cache
-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 682.756 MB/s
Sequential Write : 45.236 MB/s
Random Read 512KB : 555.918 MB/s
Random Write 512KB : 44.922 MB/s
Random Read 4KB (QD=1) : 11.900 MB/s [ 2905.2 IOPS]
Random Write 4KB (QD=1) : 1.764 MB/s [ 430.6 IOPS]
Random Read 4KB (QD=32) : 26.159 MB/s [ 6386.4 IOPS]
Random Write 4KB (QD=32) : 2.915 MB/s [ 711.6 IOPS]
Test : 1000 MB [C: 38.6% (24.7/63.9 GB)] (x5)
Date : 2014/11/26 21:35:47
OS : Windows 7 Professional N SP1 [6.1 Build 7601] (x64)
ZFS
- SSD Cache (No LOG)
-----------------------------------------------------------------------
CrystalDiskMark 3.0.3 x64 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]
Sequential Read : 729.191 MB/s
Sequential Write : 53.499 MB/s
Random Read 512KB : 625.833 MB/s
Random Write 512KB : 45.738 MB/s
Random Read 4KB (QD=1) : 12.780 MB/s [ 3120.1 IOPS]
Random Write 4KB (QD=1) : 2.667 MB/s [ 651.1 IOPS]
Random Read 4KB (QD=32) : 27.777 MB/s [ 6781.4 IOPS]
Random Write 4KB (QD=32) : 3.823 MB/s [ 933.4 IOPS]
Test : 1000 MB [C: 38.6% (24.7/63.9 GB)] (x5)
Date : 2014/11/26 23:29:07
OS : Windows 7 Professional N SP1 [6.1 Build 7601] (x64)
It almost seems that Ceph is managing to bypass the ZFS log & cache altogether.
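One way to verify that would be to watch the log and cache vdevs while a benchmark runs ('tank' is a placeholder for the actual pool name):
Code:
# per-vdev I/O stats refreshed every second; if the log and cache
# devices sit at zero during a Ceph write test, they're being bypassed
zpool iostat -v tank 1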