So I've been playing around in my lab to get Ceph going on a 5-node cluster backed by a 10G network.
I've set up 3 of the nodes to test with so far, each with one NVMe drive and one HDD.
I've created two CRUSH rules: the default one and another targeting the SSD/NVMe device class (a rough sketch of the setup commands is after the VM list below).
I've set up the corresponding pools and created 4 VMs:
VM1: Ceph NVMe pool
VM2: Ceph HDD pool
VM3: local ZFS pool (on the NVMe)
VM4: ZFS 5-vdev HDD pool
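For reference, the device-class rule and pool setup was roughly along these lines (rule/pool names and PG counts here are just examples, not necessarily exactly what I used):

# CRUSH rule that keeps a pool on the NVMe-class OSDs only
ceph osd crush rule create-replicated nvme-only default host nvme
# one pool per rule; the HDD pool just stays on the default replicated_rule
ceph osd pool create ceph-nvme 64
ceph osd pool set ceph-nvme crush_rule nvme-only
ceph osd pool create ceph-hdd 64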
I tested the speeds inside the VMs (all of them use VirtIO disks) with fio:
fio --directory=/ --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio --size=1G
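To take the VM/VirtIO layer out of the picture, the same kind of 4K write test can also be pointed straight at an RBD image with fio's rbd engine. Something like the following (pool and image names are placeholders, the image has to exist already, and I'm not sure the sync/fsync options map 1:1 to the in-VM run):

# hypothetical example: 4K writes with an fsync after each, against a pre-created RBD image
fio --name=rbd-4k-write --ioengine=rbd --clientname=admin --pool=ceph-nvme --rbdname=fio-test --rw=write --bs=4K --numjobs=1 --iodepth=1 --fsync=1 --direct=1 --runtime=60 --time_based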
I'm seeing a massive gap between the Ceph and ZFS setups, with ZFS roughly 5-8x faster than Ceph depending on the backing device:
Ceph NVMe: 110 IOPS
Ceph HDD: 10 IOPS
ZFS NVMe: 890 IOPS
ZFS HDD pool: 47 IOPS
I'm using consumer-grade parts across all test scenarios.
My question is: why is there such a massive discrepancy between Ceph and ZFS, and is this expected? Is there something I'm doing wrong here?