I have a server:
CPU: 2x Intel Xeon E5-2689 @ 2.6 GHz
Mobo: Supermicro X9DRi-LN4F+
RAM: 72 GB ECC @ 1333 MHz
HD: NVMe 970 EVO 256 GB, 2x 1 TB WD, 2x 500 GB SSD
GPU: AMD RX 570
OS: Proxmox
Every time I copy a big file from one drive to another, iowait climbs to around 80% and all VMs become unusable until the transfer finishes. My NVMe tops out at 1.5 GB/s and 10k IOPS. I tried switching the filesystem between EXT4 and ZFS and got the same results. Until now I hadn't bothered digging further because I assumed it was a motherboard issue.

Today I decided to pass the NVMe drive through to a Windows VM directly and ran some tests. To my surprise, I'm getting 3.5 GB/s and 130k IOPS, which is what a 970 EVO should do.

What could cause this? Is there a kernel module or parameter that could be slowing my drives down? The issue happens on my SATA SSDs too: they get low IOPS and low throughput no matter which filesystem I use.
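In case it helps, here is a small sketch (Python, stdlib only) of what I could run on the Proxmox host to dump the I/O scheduler and writeback tunables I suspect. The device names (nvme0n1, sda, sdb) are just placeholders for whatever lsblk actually shows on my box.

```python
#!/usr/bin/env python3
# Diagnostic sketch: print the host-side knobs most often involved in
# iowait spikes during large copies. Device names below are assumptions --
# replace them with whatever lsblk reports on the Proxmox host.
from pathlib import Path

def read(path: str) -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else "n/a"

# Per-device I/O scheduler and queue depth (sysfs)
for dev in ("nvme0n1", "sda", "sdb"):  # assumed device names
    base = f"/sys/block/{dev}/queue"
    print(f"{dev}: scheduler={read(base + '/scheduler')} "
          f"nr_requests={read(base + '/nr_requests')}")

# Writeback tunables controlling how much dirty page cache can pile up
# before every writer (including the VMs' virtual disks) gets throttled.
for knob in ("dirty_ratio", "dirty_background_ratio",
             "dirty_bytes", "dirty_background_bytes"):
    print(f"vm.{knob} = {read('/proc/sys/vm/' + knob)}")
```

If anyone thinks a particular scheduler or dirty_* setting is the likely culprit here, I can post the output.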