We would like some assistance with a performance issue we are seeing on a PVE server: high CPU IOwait during heavier I/O operations such as rsync. The IOwait is most pronounced on the PVE host itself, but guests show relatively high IOwait as well. Non-virtualized servers that we can compare against show no appreciable IOwait during the same operations.
We determined that the default ashift value of 9 (512-byte sectors) was incorrect for disks with a 4K sector size, so we recreated the pool with ashift=12. This did not help the IOwait problem.
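For reference, a sketch of how the ashift change was applied and verified (using the pool name `zfspool` from the configuration below; the device path is elided):

```shell
# ashift is fixed at pool creation time, so changing it means
# destroying and recreating the pool with an explicit value:
zpool create -f -o ashift=12 zfspool raidz2 /dev/disk/by-id/...

# Confirm the ashift actually in effect on the pool:
zdb -C zfspool | grep ashift
```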
We read that RAID10 is recommended for arrays with fewer than 6 disks, so we made this change as well. This did not help the IOwait problem.
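In ZFS terms, "RAID10" means a pool of striped mirror vdevs. A sketch of how the 4-disk layout would be created (the `diskA`..`diskD` device names are placeholders, not our actual by-id paths):

```shell
# Two mirror vdevs of two disks each; ZFS stripes writes across
# the mirrors, giving a RAID10-equivalent layout:
zpool create -f -o ashift=12 zfspool \
  mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB \
  mirror /dev/disk/by-id/diskC /dev/disk/by-id/diskD
```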
We changed the guest's mount options in the KVM to noatime (instead of the default). This seemed to help a little, but IOwait still spiked considerably.
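The noatime change amounts to an /etc/fstab entry inside the guest along these lines (the `centos-root` LV name is an assumption based on a typical CentOS 7 LVM layout, not taken from our actual config):

```shell
# /etc/fstab in the CentOS 7 guest; noatime stops the filesystem
# from issuing a metadata write for every file read.
# <device>               <mount>  <type>  <options>          <dump> <pass>
/dev/mapper/centos-root  /        xfs     defaults,noatime   0      0
```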
We would appreciate assistance in tuning this server to be maximally performant, especially with regard to disk I/O / IOwait.
Thanks.
Configuration
Hardware:
- Dell PowerEdge T340
- 32GB RAM
- 12-core CPU
- No hardware RAID
- 2 x 240GB SSD for OS
- 4 x 1TB 7200rpm SATA disks for data
- Server is a combined application / database server (postgres 9.5)
- CentOS 7 KVM guest
  - 500GB raw image
  - 16GB RAM
  - 8 of 12 CPUs
  - 1.8TB disk (using LVM)
  - xfs filesystem
- The problem manifests on PVE 5.2, 5.3 and 5.4
- Using ZFS on top of LUKS encryption
- zpool create -f zfspool raidz2 /dev/disk/by-id/dm-uuid-CRYPT-LUKS1-*-luks*
- zfs set sync=disabled zfspool
- zfs set atime=off zfspool
- zfs set recordsize=4K zfspool
- zfs set primarycache=metadata zfspool
- zfs set secondarycache=metadata zfspool
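For completeness, the resulting dataset properties can be confirmed in one command:

```shell
# Show the effective values of the properties set above:
zfs get sync,atime,recordsize,primarycache,secondarycache zfspool
```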