Hello !
I have the following Proxmox and Ceph setup:
Code:
root@proxmox1:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 18.19080 root default
-7 0.90970 host proxmox1
5 ssd 0.90970 osd.5 up 1.00000 1.00000
-3 5.45695 host proxmox2
0 hdd 1.81898 osd.0 up 1.00000 1.00000
1 hdd 1.81898 osd.1 up 1.00000 1.00000
2 hdd 1.81898 osd.2 up 1.00000 1.00000
-16 8.18535 host proxmox3
3 hdd 1.81940 osd.3 up 1.00000 1.00000
10 hdd 1.81898 osd.10 up 1.00000 1.00000
11 hdd 1.81898 osd.11 up 1.00000 1.00000
12 hdd 1.81898 osd.12 up 1.00000 1.00000
9 ssd 0.90900 osd.9 up 1.00000 1.00000
-13 0.90970 host proxmox4
4 ssd 0.90970 osd.4 up 1.00000 1.00000
-10 2.72910 host proxmox5
6 hdd 1.81940 osd.6 up 1.00000 1.00000
7 ssd 0.90970 osd.7 up 1.00000 1.00000
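In case it helps with diagnosis, both pools are supposed to be pinned to their matching device class through CRUSH rules; I can post the output of these commands if needed (pool names as above, nothing else assumed):

```shell
# Show each pool's size, pg_num and assigned crush_rule
ceph osd pool ls detail

# Dump the CRUSH rules to check that the ssdpool rule really
# selects only OSDs of device class "ssd" (and hddpool only "hdd")
ceph osd crush rule dump

# Which rule id each pool actually uses
ceph osd pool get ssdpool crush_rule
ceph osd pool get hddpool crush_rule
```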
I experienced performance issues on the VMs that live on the ssdpool (the hddpool only holds archival RBD images where speed does not matter).
When I ran a benchmark, I was really disappointed:
Code:
root@proxmox1:~# rados bench 60 write -p ssdpool
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0 objects
Object prefix: benchmark_data_proxmox1_3691
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0 0 0 0 - 0
1 16 16 0 0 0 - 0
2 16 23 7 13.9988 14 0.440324 1.1027
3 16 23 7 9.3325 0 - 1.1027
4 16 34 18 17.9984 22 2.5665 2.60108
5 16 42 26 20.7981 32 0.246745 1.96686
6 16 44 28 18.6649 8 4.71746 2.16382
7 16 46 30 17.1412 8 0.223545 2.03352
8 16 46 30 14.9984 0 - 2.03352
9 16 46 30 13.3317 0 - 2.03352
10 16 46 30 11.9985 0 - 2.03352
11 16 46 30 10.9077 0 - 2.03352
12 16 46 30 9.99878 0 - 2.03352
13 16 46 30 9.22959 0 - 2.03352
14 16 46 30 8.57028 0 - 2.03352
15 16 46 30 7.99895 0 - 2.03352
16 16 46 30 7.49901 0 - 2.03352
17 16 46 30 7.05791 0 - 2.03352
whereas with the hddpool (spinning disks only):
Code:
root@proxmox1:~# rados bench 60 write -p hddpool
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 60 seconds or 0 objects
Object prefix: benchmark_data_proxmox1_3076
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
0 0 0 0 0 0 - 0
1 16 28 12 47.999 48 0.940427 0.739306
2 16 51 35 69.9969 92 0.722352 0.742403
3 16 71 55 73.3288 80 0.489085 0.680408
4 16 93 77 76.9947 88 0.325764 0.600826
5 16 103 87 69.5933 40 0.277885 0.565222
6 16 104 88 58.661 4 0.180992 0.560855
7 16 104 88 50.2806 0 - 0.560855
8 16 129 113 56.4942 50 0.821083 1.06378
9 16 153 137 60.8822 96 0.45285 0.99641
10 16 171 155 61.9932 72 0.420413 0.936436
11 16 188 172 62.5386 68 0.321447 0.876192
12 16 202 186 61.9933 56 3.64751 0.889077
13 16 225 209 64.3008 92 3.92119 0.937779
14 16 247 231 65.9925 88 0.400445 0.936914
15 16 273 257 68.5256 104 0.405007 0.913546
16 16 296 280 69.9923 92 0.477598 0.889124
17 16 318 302 71.0511 88 1.10232 0.881567
18 16 336 320 71.1034 72 0.520223 0.862996
Could you help me find and resolve the issue?
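I suspect the SSDs themselves could be part of the problem (I have read that consumer SSDs without power-loss protection can be very slow at the synchronous small writes Ceph issues). If useful, I can benchmark one of the SSDs directly with fio, for example like this (it writes to a 1 GiB test file on the SSD's filesystem, not to the raw OSD device, so it is non-destructive; the path is just an example):

```shell
# 4k single-depth sync-write latency test, the access pattern that
# Ceph's write path stresses most; run it on a mount backed by the SSD
fio --name=synctest --filename=./fio-testfile --size=1G \
    --rw=randwrite --bs=4k --ioengine=libaio --direct=1 \
    --sync=1 --iodepth=1 --numjobs=1 --runtime=60 --time_based
```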
Thanks