I'm testing some stuff with Crucial T700 disks I had lying around, before ordering something better (i.e. enterprise/datacenter grade). I know about the well-known gotcha with consumer SSDs and how ZFS, in use cases like Proxmox, will burn through their TBW.
Setting that aside, I was always (naively?) expecting that NVMes, with their crazy sequential speeds that never really apply in real-world scenarios, would be fit for purpose for something like creating a backup of a VM. However, whenever I back up a VM, I can't get past roughly 500 MB/s in read and write. I also tried with a second set of disks, so that the reads came from one set of mirrors and the writes went to another - and it settled at the same ~500 MB/s steady state.
What's the gotcha that I am missing? Is it due to ZFS itself? There's more than enough CPU and RAM available when doing the backups, and I set the ionice priority to 0, to no avail.
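For reference, to rule out the backup tool itself, I plan to time a plain sequential copy between the two pools. This is only a rough sketch (the paths are placeholders for my datasets, and the source file should be larger than RAM or otherwise cold, so the ARC doesn't inflate the read side):

#!/usr/bin/env python3
# Rough sequential-throughput check, independent of vzdump:
# stream a large existing file (e.g. a raw VM disk image) from the source
# pool to the target pool in big chunks and report the achieved MB/s.
import os
import time

SRC = "/rpool-src/images/100/vm-100-disk-0.raw"   # placeholder: file on the source mirror set
DST = "/rpool-dst/tmp/throughput-test.bin"        # placeholder: file on the target mirror set
CHUNK = 16 * 1024 * 1024                          # 16 MiB sequential chunks

copied = 0
start = time.monotonic()
with open(SRC, "rb", buffering=0) as src, open(DST, "wb", buffering=0) as dst:
    while True:
        buf = src.read(CHUNK)
        if not buf:
            break
        dst.write(buf)
        copied += len(buf)
    dst.flush()
    os.fsync(dst.fileno())                        # make sure the writes actually hit the pool
elapsed = time.monotonic() - start

print(f"copied {copied / 1e9:.1f} GB in {elapsed:.1f} s "
      f"-> {copied / 1e6 / elapsed:.0f} MB/s")

If that plain copy also tops out around 500 MB/s, the ceiling is presumably somewhere in the storage path itself; if it runs much faster, the bottleneck is more likely the backup pipeline (compression settings and the like).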