I've been tracking down an issue for a few weeks now and have come to the conclusion that it may have something to do with LVM-thin not properly trimming after a VM is deleted.
Using the same workload, you can compare LVM-thin vs ZFS vs EXT4. I'm starting to think this is an issue with the drive not being TRIM'd properly. I'm not sure why EXT4 doesn't have this issue, and maybe Windows doesn't either? I'll see if I can get a Windows install on the drive and try to replicate the issue there.
You'll notice at about the 5th or 6th run the drive drops off substantially. I'm using a 300GB file for this test (so 5-6 x 300GB = 1.5-1.8TB written), so unless it's just a crazy coincidence, it looks like the drive isn't being trimmed. That would explain the performance you can see below, where I'm assuming the controller no longer has any blocks it knows are free and has to erase before it can write, resulting in a 2-6x increase in write latency.
Y axis is speed in MB/s for a VM restore, X axis is the number of times the restore has been run sequentially. The VM is deleted after each restore.
This is with LVM
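(For anyone who wants to verify whether discards are even making it down the stack, something like the below should show it. /dev/sdX and the VG name "pve" are just placeholders for whatever your setup uses.)

# Non-zero DISC-GRAN / DISC-MAX means that layer accepts discards; 0 means they stop there
lsblk --discard /dev/sdX

# For LVM-thin, the pool's discards mode should be "passdown" for TRIM to reach the SSD
lvs -o lv_name,lv_size,discards pve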

Now, the interesting part. I'm not sure how to run a TRIM on LVM without completely destroying the partitions; maybe y'all would know more about that. With ZFS the issue looked the same as LVM at first, though I'm not sure if that's because the drive had been formatted as LVM beforehand and that "poisoned" the TRIM. Either way, I was getting super slow speeds on the ZFS pool, ran a manual TRIM, and bam, it's back to full speed, even after writing to it 12 more times, which would be well over 2x the size of the drive in writes. LVM-thin, on the other hand, consistently tanks at about the 5th or 6th 300GB write.
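(For reference, the ZFS side is just the built-in zpool command. For LVM-thin I don't know of a direct fstrim equivalent for the pool itself; the only non-destructive route I'm aware of is making sure discards get passed down from the guests instead. The pool name "tank" below is an example and I haven't confirmed every setting on my own box, so treat this as a sketch.)

# ZFS: one-off TRIM of all free space in the pool, plus optional automatic trimming
zpool trim tank
zpool status -t tank          # shows TRIM progress per vdev
zpool set autotrim=on tank

# LVM-thin: enable the Discard option on the VM's virtual disk, then trim from
# inside the guest so the unmap requests flow down to the thin pool and the SSD
fstrim -av                    # run inside the VM, not on the host

# issue_discards in /etc/lvm/lvm.conf only covers space freed from regular
# (non-thin) LVs; thin pools use their own "discards" setting, passdown by default
grep issue_discards /etc/lvm/lvm.conf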

Another interesting point: this behavior is not replicated on EXT4, regardless of whether LVM was on the drive beforehand or not.
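(My guess, and it is only a guess, for why EXT4 gets away with it: mounted ext4 filesystems get trimmed by the periodic fstrim.timer, which I believe is enabled by default on recent Debian, or by the discard mount option, while nothing equivalent ever runs against the thin pool. Easy enough to check; the mount point below is just an example.)

# Is the periodic TRIM timer active? It only covers mounted filesystems, not the thin pool
systemctl status fstrim.timer

# One-off manual trim of a mounted ext4 filesystem, verbose so it reports how much it freed
fstrim -v /mnt/ext4test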

Hopefully that all makes sense. I feel like we're close to figuring out the issue; I'm just not sure whether it's a hardware/firmware problem with the drives, or whether the file system isn't behaving like it should, or at least isn't passing discards down to the hardware properly.