SSDs slow on Proxmox but not on Windows

Exodus84

Member
Jun 12, 2021
Hello, I have a weird problem I can't understand. I have a couple of cheap 1 TB Kingston SSDs (yes, I know that comes with its own problems) that I have been running in SW RAID1, hosting my VMs on that RAID pool. What I've found is that after about 6 GB of writes, the speed drops all the way down to 8 MB/s, and then of course the I/O delay goes through the roof and all my VMs slow down.
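The stall is easy to watch live with iostat from the sysstat package while a copy is running; roughly like this (the flags just mean extended stats, MB/s, 1-second interval):

iostat -xm 1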

Then I started testing a bit, by temporarily moving the VMs over to an HDD, since the SSDs were over 80% full and I know that can destroy performance.
I ran different types of copy tests: first on the drives right after they were emptied, with the same result; then I broke up the RAID and tested the SSDs separately, with still roughly the same results (no longer RAID, so it changes things a bit); then I recreated the RAID (that took almost 24 hours to rebuild) and tested again, with the same results.
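The dd copy test in the attachments was along these lines (mount point and size are just examples; oflag=direct bypasses the page cache so you see the drive's real write speed):

dd if=/dev/zero of=/mnt/ssdtest/testfile bs=1M count=10240 oflag=direct status=progress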
Then I got a buddy of mine who also runs Proxmox to test on his Kingston drives. The only difference is that he has 2x 240 GB drives, and he got the same results. Both of us are running Proxmox 7 with the latest updates available at the time (I don't remember the exact kernel, but this was around 01.03.2022).
So I thought the drives must just be THAT bad, but today I tested them on a separate Windows machine, and to my surprise, both standalone and in a Windows 11 SW RAID, they performed EXCELLENTLY, even with a 40 GB file transfer.
So my question is: how can there be a difference of 450 MB/s in transfer speed after 6 GB copied between Proxmox and Windows 11?
And what else can I try to get them to work properly?
 

Attachments

  • linuxcopytest.png
  • linuxddcopytest.png
  • linuxddcopytest1.png
  • windowscopy.png
  • windowscrystaldiskmark.png
What do you mean exactly by SW RAID? ZFS or your onboard RAID? You also didn't tell us what model of SSD you are using. If it's a QLC SSD, it would be no wonder the performance drops after some writes, as soon as the RAM cache and SLC cache get full.
 
I've set them up in RAID1 with mdadm and formatted them with ext4. It's the cheapest Kingston with the QLC and bad cache, yes, a Kingston A400 960GB, so I know about the performance issues with these. That's why I was so shocked when they worked fine on Windows. Does Windows "help" them in any way?
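For reference, the array was created along these lines (device names are just examples):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0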
My plan is to buy better ones, but I can't afford it right now. I don't need enterprise performance, just better than 8 MB/s :p
 
Are the SSDs filled to the same level when doing benchmarks? The fuller the SSD gets, the smaller your SLC cache will be and the sooner you will see the terrible real QLC speeds.

The 8 MB/s isn't far off what other people see in benchmarks:
Because of my discovery that the drives’ behaviour is very data dependent and state dependent, I decided to try and fill the drive with random data to see what its actual full-surface write speed is. Unfortunately, this process took over 13 hours, averaging just 17.4MB/s. This is a very un-SSD-like result that could easily be bested by many hard drives and even USB 2.0 flash drives. This makes the drive a very poor choice if you intend to fill it up and this kind of behaviour is likely the cause of poor performance reported by users. Even reading actual data quickly is a challenge for the drive, making this a rather big disappointment.
And with PVE you get virtualization overhead, nested filesystems/storage and so on, so a drop from a raw 17.4 MB/s to 8 MB/s wouldn't be unusual.

With SSDs it's basically "buy cheap and you buy twice". For the same price you could have bought a pair of second-hand enterprise TLC/MLC SSDs that perform way better.
 
Yeah, I've learned that now :P

But still, why doesn't this problem show up in Windows? I've tried them both completely empty and full in PVE with the same results, but only empty in Windows, and there they never slow down.
 
Then fill it up to 900 GiB, start CrystalDiskMark, set it to 5x 16 GiB and run the "SEQ1M Q1T1" benchmark. If the SSD is slow when writing to QLC NAND, you should see bad performance there too.
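If you'd rather script it than click through the CrystalDiskMark GUI, a roughly equivalent write pass with fio would look something like this (file path and size are examples; 1 MiB sequential writes at queue depth 1, single thread, matching SEQ1M Q1T1):

fio.exe --ioengine=windowsaio --filename=C\:\\benchmark.fio --direct=1 --rw=write --bs=1M --iodepth=1 --numjobs=1 --size=16G --loops=5 --name=seq1m_q1t1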

SSDs don't have dedicated SLC flash cells for caching. They use the normal QLC flash cells and write to them in SLC mode, which is way faster but wastes a lot of space. The fuller your SSD becomes, the fewer empty QLC cells you have that can be used in SLC mode for caching. As long as there is space for SLC caching, performance will look fine. You only see the terrible real QLC performance as soon as the caches are full.
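As a rough worked example (numbers assumed for illustration): a 960 GB QLC drive that is 80% full has about 192 GB of empty cells; in SLC mode each cell stores 1 bit instead of 4, so those cells only give you roughly 192/4 = 48 GB of pseudo-SLC cache, and everything written beyond that lands at raw QLC speed.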
 
Yup, if you want to see how slow your SSD can get in the worst case, use something like this:
fio.exe --ioengine=windowsaio --filename=C\:\\benchmark.fio --size=1G --direct=1 --sync=1 --rw=randwrite --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=sync_random_write_iops
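The same test on the PVE host itself would look roughly like this (libaio instead of windowsaio; file path and size are examples):

fio --ioengine=libaio --filename=/root/benchmark.fio --size=1G --direct=1 --sync=1 --rw=randwrite --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=sync_random_write_iops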
 
