Hello all,
I'm testing Proxmox with Ceph and have built the following hardware configuration:
- 2 * Intel journal SSDs
- 11 * 6TB Seagate OSD disks + 1 * 2TB WD disk
2 * replication, 1024 PGs on a test pool, ceph health reports OK.
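(Sanity check on the PG count: the usual rule of thumb is roughly (number of OSDs * 100) / replication size, so (12 * 100) / 2 = 600, rounded up to the next power of two gives 1024.)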
I get the following read and write speeds according to rados bench:
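(Roughly the commands used - a 100 s write with the objects kept, then a sequential read pass; the pool name "test" is just a placeholder:)
rados bench -p test 100 write --no-cleanup
rados bench -p test 100 seq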
Write bench:
Total time run: 100.174462
Total writes made: 6553
Write size: 4194304
Bandwidth (MB/sec): 261.663
Stddev Bandwidth: 49.1988
Max bandwidth (MB/sec): 356
Min bandwidth (MB/sec): 0
Average Latency: 0.244563
Stddev Latency: 0.21535
Max latency: 2.51924
Min latency: 0.015474
Read bench:
Total time run: 100.170912
Total reads made: 24852
Read size: 4194304
Bandwidth (MB/sec): 992.384
Average Latency: 0.06447
Max latency: 0.766223
Min latency: 0.005859
RADOS read and write performance seems to be OK.
When launching a VM (latest Debian distro) with a Ceph disk (RBD, virtio disk, deadline scheduler on the VM disk, fresh xfs/ext4 or even btrfs partition), I get the results below.
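(For completeness, "deadline on the VM disk" means something along these lines inside the guest, vdb being the Ceph-backed disk:)
echo deadline > /sys/block/vdb/queue/scheduler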
Write test:
dd if=/dev/zero of=cephtest bs=16k count=1M
1048576+0 records in
1048576+0 records out
17179869184 bytes (17 GB) copied, 83.2059 s, 206 MB/s
Seems reasonable -> disk I/O on the Ceph servers (confirmed with iostat) goes up to ~90%.
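(Note: this dd does not force a flush, so part of the 206 MB/s may still be guest page cache; a variant that includes the final sync in the timing would be:)
dd if=/dev/zero of=cephtest bs=16k count=1M conv=fdatasync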
Read test:
dd if=cephtest of=/dev/null bs=16k count=1M
1048576+0 records in
1048576+0 records out
17179869184 bytes (17 GB) copied, 168.887 s, 102 MB/s
Not OK -> disk I/O on the Ceph servers (confirmed with iostat) only goes to 20-30%.
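(The read numbers above go through the guest page cache; just for comparison, a cache-bypassing variant would look like:)
dd if=cephtest of=/dev/null bs=16k count=1M iflag=direct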
Does anybody have an idea why the difference is that big?
When I run iostat inside the VM during normal use, I see the following:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vdb 3.00 0.00 214.00 5.00 13516.00 1364.00 135.89 154.96 28.55 9.40 848.00 4.57 100.00%
vdb 2.00 0.00 186.00 7.00 13396.00 3552.00 175.63 149.38 80.58 10.22 1950.29 5.18 100.00%
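(The extended stats above come from running extended iostat inside the guest, something like:)
iostat -x 1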