Ceph KVM file system type: ext4 or XFS?

RobFantini

Hello

From what I read at http://forum.proxmox.com/threads/18552-ceph-performance-and-latency, it seems that within a KVM it is better to use ext4.

All of our KVMs on Ceph use XFS for their data storage disks. Those KVMs are not as fast as we'd like for some of the current disk I/O, and the Ceph nodes show greater I/O delays under PVE > Summary.


So the question is: for now, does it make sense to use ext4 as the default for KVM disks hosted on Ceph?
 
I have seen a steady drop in performance with XFS in the kernels used since pve-3.1. As a consequence I have migrated all my KVMs to ext4: IOPS went up roughly fourfold and raw throughput roughly doubled.
 

Thank you, I'll switch to ext4.
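In case it helps anyone else, here is a rough sketch of what moving a guest data disk to ext4 could look like from inside the VM. The device name /dev/vdb and mount point /data are placeholders, and mkfs wipes the disk, so only run this against a fresh or backed-up data disk:

Code:
# check which block device is the empty data disk first
lsblk

# create the ext4 file system on the data disk (destroys existing data)
mkfs.ext4 /dev/vdb

# mount it and make the mount persistent across reboots
mkdir -p /data
mount /dev/vdb /data
echo '/dev/vdb /data ext4 defaults 0 2' >> /etc/fstab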
 
It depends on a lot of things, but generally I am not able to produce benchmarks showing more than a marginal difference between writeback and nocache. My storage pool is ZFS-based, using the native ZFS plugin (iSCSI) or qcow2 over the built-in NFS support in ZFS. A benefit of using nocache is higher safety for writes, since writes are passed directly through to the storage pool; reads are still cached in the VM's file system. My guess is that you will see the same with Ceph.
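If you want to compare the two modes yourself, the cache setting can be changed per disk before each benchmark run (via the GUI or the VM config). A hypothetical disk line in /etc/pve/qemu-server/<vmid>.conf might look like this, with the storage and volume names being placeholders:

Code:
# writeback for one test run...
virtio0: ceph-storage:vm-101-disk-1,cache=writeback,size=32G
# ...and no cache (the Proxmox default) for the other
virtio0: ceph-storage:vm-101-disk-1,cache=none,size=32G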
 
To run some benchmarks you can use this config with fio (run fio inside the VM):

Code:
# This job file tries to mimic the Intel IOMeter File Server Access Pattern
[global]
description=Emulation of Intel IOmeter File Server Access Pattern


[iometer]
bssplit=512/10:1k/5:2k/5:4k/60:8k/2:16k/4:32k/4:64k/10
rw=randrw
rwmixread=80
direct=1
size=4g
ioengine=libaio
# IOMeter defines the server loads as the following:
# iodepth=1    Linear
# iodepth=4    Very Light
# iodepth=8    Light
# iodepth=64    Moderate
# iodepth=256    Heavy
iodepth=64
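Save the job file above (e.g. as iometer.fio) inside the VM and point fio at it. By default fio creates its test file in the current working directory, so change into a directory on the file system you want to measure first:

Code:
cd /mount/point/of/the/disk/under/test
fio /path/to/iometer.fio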
 
Another related question:

For KVM on Ceph, what is currently the best 'Cache' setting for the disks? We're using 'Write back'.

This applies to VMs on any storage, not just Ceph. Writeback certainly has higher performance, but it can cause issues such as data corruption if a VM is shut down abruptly.

If my VMs are mission critical, I use write-through cache. It is the slowest option, but there is no data corruption. Whether the VM disk is on Ceph, NFS, or LVM, the caching mechanism is the same.
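For reference, one way to switch an existing disk to write-through is from the host shell with qm set (the same option is available in the GUI under the disk's cache setting). The VM ID, bus and volume below are placeholders, so adjust them to your own setup:

Code:
# re-specify the disk with cache=writethrough (VM 100 and virtio0 are examples)
qm set 100 --virtio0 ceph-storage:vm-100-disk-1,cache=writethrough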
 

Thank you for the information on write-through cache. We'll certainly use that for our data systems.