Poor SSD performance in guests

Apr 27, 2019
If I do any kind of heavy write (1GB+) in any of my Linux guests, the load eventually spikes and all the other guests, and Proxmox itself, grind to a halt.

I'm using ZFS with mirrored 120GB SSDs. I also have a ZFS mirror of 240GB SSDs that I migrated a few of the guests to, just to see if that changed anything, and it didn't seem to. The SSDs are el-cheapo models from MicroCenter (think $20 and $40 each).

I don't see similar behavior when logged into Proxmox itself, so I'm guessing either something is misconfigured or something else is going on.
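
A rough probe along these lines should show where the throughput drops off (the file path and sizes below are just placeholders, and writing random data keeps ZFS compression from hiding the real write rate); running it once on the host and once inside a guest makes the comparison easy:

import os
import time

# Writes fixed-size chunks past the point where the stalls start and prints
# per-chunk throughput, so you can see where (or whether) the drive falls off.
TEST_FILE = "/tmp/write_probe.bin"   # placeholder path; point it at the pool under test
CHUNK_MB = 128                       # chunk size in MiB
TOTAL_MB = 4096                      # write well past the ~1GB mark where the problem shows up

chunk = os.urandom(CHUNK_MB * 1024 * 1024)   # incompressible data

with open(TEST_FILE, "wb") as f:
    written = 0
    while written < TOTAL_MB:
        start = time.monotonic()
        f.write(chunk)
        f.flush()
        os.fsync(f.fileno())         # force the data to the device, not just the page cache
        elapsed = time.monotonic() - start
        written += CHUNK_MB
        print(f"{written:5d} MiB written  {CHUNK_MB / elapsed:7.1f} MiB/s")

os.remove(TEST_FILE)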

As tom said, cheap SSDs have a small amount of fast NAND as a cache layer, and the rest is slower, cheaper NAND. When you write a large file (1GB+) you fill the fast layer, spill over into the slow area, and end up with large I/O wait.
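
If you want to watch that happen in numbers, a quick sketch like this (just sampling /proc/stat once a second, nothing Proxmox-specific) will show iowait climbing as soon as the write spills past the fast layer:

import time

def cpu_times():
    # First line of /proc/stat: "cpu  user nice system idle iowait irq softirq ..."
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]
    values = list(map(int, fields))
    return sum(values), values[4]    # (total jiffies, iowait jiffies)

prev_total, prev_iowait = cpu_times()
while True:
    time.sleep(1)
    total, iowait = cpu_times()
    delta_total = total - prev_total
    delta_iowait = iowait - prev_iowait
    pct = 100.0 * delta_iowait / delta_total if delta_total else 0.0
    print(f"iowait: {pct:5.1f}%")
    prev_total, prev_iowait = total, iowait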

Would some DC S3700s be better? I don't need a lot of space (200GB is more than enough).
 
OK, I did a bit of testing and was able to set up LVM-over-iSCSI storage backed by my FreeNAS box and move one of my guests over. Sequential writes increased by a factor of 5 and I/O load never crossed 20%. So enterprise SSDs or iSCSI it is.