@zedicus : I just tested a standard-grade Samsung NVMe M.2 SSD and it gives 270 fsyncs per second, and as it is nearly 60% full, even IOPS are far from impressive.
Here is a fio result :
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k...
You are using ZFS. The best results we have had with ZFS were obtained with simple SSDs + an HBA.
May I ask why you based your perf test on fsync() performance ? If you don't fsync(), what results do you get with fio, for example ?
If you are planning to host MySQL DBs, you will have to rely on fsync()...
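If it helps, here is a minimal fio sketch for measuring a sustained fsync rate (job name, file name and size are placeholders, adapt them to your setup) :
fio --name=fsync-test --filename=fsync-test.file --size=1G --bs=4k --rw=randwrite --ioengine=sync --fsync=1 --runtime=60 --time_based
One fsync is issued after every 4k write, so the reported write IOPS is effectively the fsync rate ; recent fio builds also print a sync latency percentile breakdown.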
If you have 8 disks and use RAID 10 (1+0), you will have 4 disks per sub-array, netting you 4 TB of data : (2+2) in RAID 1 and the two (2+2) sub-arrays striped, so 4 TB usable.
If you chose RAID 1 on an 8-HDD hardware RAID or a SAN, the aggregate disk distribution would probably be (1+1)(1+1)(1+1)(1+1),
so 4 TB as well.
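For the record, the arithmetic comes out the same either way, assuming 1 TB drives (the drive size is not stated here) : every block is mirrored exactly once, so usable capacity is half of the raw capacity, i.e. 8 x 1 TB / 2 = 4 TB in both layouts.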
@Alwin, here are the raw perfs of the 64x SAS pool.
Destroyed the pools, created only an HDD pool
Issued a lot of write threads :
rados bench -p testsas 180 write -b 4M -t 1024 --no-cleanup
2018-07-06 14:51:45.695910 min lat: 3.33414 max lat: 4.06629 avg lat: 3.67097
sec Cur ops started...
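For completeness, the read side can be checked afterwards on the same pool (the --no-cleanup flag above keeps the benchmark objects around for this, pool name as above) :
rados bench -p testsas 180 seq -t 1024
rados bench -p testsas 180 rand -t 1024
rados -p testsas cleanup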
Hello @kaltsi, it's a tough one.
Slow requests can be caused by network issues, disk issues, or even controller issues, as stated by @Alwin.
Looks like you have 12x 7200 RPM SATA drives. Do you use filestore or bluestore ?
In the lab we tested filestore with 12 SATA drives too and it was not...
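To narrow down where the slow requests come from, a few commands are usually worth a look (osd.12 is just an example ID) :
ceph health detail                    # lists the OSDs currently holding slow requests
ceph osd perf                         # per-OSD commit/apply latency, helps spot a struggling disk
ceph daemon osd.12 dump_historic_ops  # run on the OSD's node, shows the slowest recent ops in detail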
Yes @Alwin you are right,
We will need to tweak this to get a more 'real life' scenario.
In fact it is called 'cache' in Ceph's documentation, but it looks more like a tiering system.
By default it seems that there is no dirty object eviction until the cache pool is full. So eventually...
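If we go down that road, these are the knobs we would tweak to force earlier flushing/eviction (values are only illustrative, pool name 'cache' as in the setup described in this thread) :
ceph osd pool set cache target_max_bytes 500000000000    # cap the cache pool at ~500 GB
ceph osd pool set cache cache_target_dirty_ratio 0.4     # start flushing dirty objects at 40% of the target
ceph osd pool set cache cache_target_full_ratio 0.8      # start evicting clean objects at 80% of the target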
So here are the results :
I/Os are issued on the SSD pool :
rados bench -p cache 60 write -b 4M -t 16
sec Cur ops started finished avg MB/s cur MB/s last lat(s) avg lat(s)
60 16 16251 16235 1082.18 1084 0.0415562 0.0590952
Total time run: 60.048231
Total...
I did it like this :
one pool with all the SSDs, one pool with all the HDDs.
Then assign the SSD pool named cache to the HDD pool named data :
ceph osd tier add data cache
Assign cache policy :
ceph osd tier cache-mode cache writeback
To direct client I/O through the SSD pool :
ceph osd tier set-overlay...
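One thing worth checking if the tier never flushes anything : the tiering agent relies on hit sets on the cache pool to track object usage. A sketch, with illustrative values :
ceph osd pool set cache hit_set_type bloom   # the agent uses a bloom filter to track recently accessed objects
ceph osd pool set cache hit_set_count 1      # keep a single hit set
ceph osd pool set cache hit_set_period 3600  # covering the last hour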
Hi guys,
Here's a short summary of some tests run in our lab.
Hyperconverged setup.
Server platform is 4x :
Lenovo SR650
2x Intel Xeon Silver 4114 (10 cores)
256 GB RAM @ 2666 MHz
1x embedded 2x 10Gbps Base-T LOM (Intel X722) #CEPH
1x PCI-E 2x 10Gbps Base-T adapter (Intel X550) #VMBR0
For each...
OK, got it : the client is also a VM => the server is a Win 10 VM : RDP from the client to the Win 10 VM is a disaster.
I just fired up a Win 10 VM, stock, no updates, no drivers... it runs mostly smooth at 2560x1600 connecting from my laptop to the guest.
I guess transferring to the hypervisor is not quite the same as loading the guest. As RDP is very sensitive to variations in network conditions, I suppose checking that the client ==> VM path is OK might help.
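To rule the network out, a quick iperf3 run between the RDP client and the Win 10 guest is probably the easiest first check (the hostname is a placeholder) :
iperf3 -s                      # on the Win 10 guest
iperf3 -c win10-guest -t 30    # on the RDP client, 30-second run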