I have a k=4, m=2 erasure-coded pool on a single host with 6 × 6 TB SAS drives in a Dell R730. I have an NVMe drive (Intel DC P4510) split into a DB and a WAL logical volume for each OSD. I am using this pool with CephFS. Using iostat, I am not seeing any reads going to the NVMe drive, even when running fio benchmarks. I am seeing writes to the NVMe, just no reads. Is this the expected behavior with erasure-coded pools?
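For context, this is roughly how I'm generating the load and watching the devices; the mount point, fio job parameters, and device names below are placeholders for my setup, not the exact invocations:

    # random reads against the CephFS mount (paths/sizes are illustrative)
    fio --name=randread --directory=/mnt/cephfs/bench --rw=randread --bs=4k \
        --size=8G --numjobs=4 --iodepth=16 --direct=1 --runtime=60 --time_based

    # per-device I/O in another terminal; nvme0n1 holds the DB/WAL LVs,
    # the sdX devices are the SAS OSD data disks
    iostat -xm 2 nvme0n1 sdc sdd sde sdf sdg sdh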
ceph-volume lvm list shows for each OSD:
    [block]       /dev/ceph-724f1e31-6a08-4643-a54d-f37d12766ff3/osd-block-9376a70d-ccf4-439a-b38e-2c9ed14ac771

        block device              /dev/ceph-724f1e31-6a08-4643-a54d-f37d12766ff3/osd-block-9376a70d-ccf4-439a-b38e-2c9ed14ac771
        block uuid                FrLATf-TUed-DWUL-C4CC-kscp-2jdE-HAIAoK
        cephx lockbox secret
        cluster fsid              66adcd3d-b086-4186-a577-e628abd1e899
        cluster name              ceph
        crush device class        hdd
        db device                 /dev/cache/osd9-db
        db uuid                   udiu7E-9Tp0-Mb91-rQ4A-fBY9-O0Mk-fHf7u1
        encrypted                 0
        osd fsid                  9376a70d-ccf4-439a-b38e-2c9ed14ac771
        osd id                    9
        osdspec affinity
        type                      block
        vdo                       0
        wal device                /dev/cache/osd9-wal
        wal uuid                  9uzxhC-7DJD-rSdx-sVFf-NIPN-DnYF-IkXzY1
        with tpm                  0
        devices                   /dev/sdd
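If it helps, I can also pull the OSD's internal BlueFS counters to see whether RocksDB reads are reaching the DB device at all; a sketch of what I'd run (osd.9 is just the OSD from the listing above, and the socket path assumes a default install on the OSD host):

    # dump BlueFS performance counters for one OSD
    ceph daemon osd.9 perf dump bluefs

    # same thing via the admin socket directly
    ceph --admin-daemon /var/run/ceph/ceph-osd.9.asok perf dump bluefs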