VMs very slow

I'm not sure if TRIM is supported over the IDE controller.
Try to move a slow VM to a new datastore on a spare SSD.
What SSD model?
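One way to check whether TRIM/discard actually reaches the disk from inside the VM is with standard Linux tools (a sketch; device names depend on your layout):

```shell
# Show discard (TRIM) capability for all block devices.
# DISC-GRAN / DISC-MAX of 0 means the device does not accept discards,
# which is typical for a disk attached via emulated IDE.
lsblk --discard -o NAME,DISC-GRAN,DISC-MAX,MOUNTPOINT

# Trim all mounted filesystems that support it and report how much
# was discarded (needs root; fails cleanly on devices without TRIM).
fstrim --all --verbose
```

If `lsblk` shows zero granularity, switching the virtual disk to VirtIO SCSI with the "Discard" option enabled is the usual fix on Proxmox.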

Yes, only I don't have a spare SSD in this server.

The model is: Samsung SSD 870 QVO 2TB

I just don't understand why it always worked until last week, without seeing anything unusual in the performance monitor.
 
 
What about space? Used / free?
The QVO is a low-end product because of its QLC NAND, and it isn't designed for server-grade hardware.
A hardware RAID controller will disable the SSD's cache, so performance is bad even with new disks.
The gurus will confirm.
 

33% used: 1.21 TB used of 4 TB.

OK, but it is strange; it worked like a charm for 1.5 years.
 
The model is: Samsung SSD 870 QVO 2TB

QVOs use QLC NAND. This is the worst you can currently get.
Those are, in my humble opinion, not reasonable for anything. (Looking at the price difference, at least in my country, compared to the TLC-NAND-based EVOs, not even for pure cold storage.)

OK, but it is strange; it worked like a charm for 1.5 years.

Check the SMART values of all of them (and research/understand their meaning).
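With smartmontools installed, this could look like the following (the device name and the attribute names are assumptions; Samsung consumer SSDs typically report these, but other vendors use different IDs):

```shell
# Full SMART attribute table for one disk (adjust /dev/sda per disk).
smartctl -A /dev/sda

# Filter to the attributes most relevant for wear on Samsung SSDs:
# column 2 is the attribute name, column 10 the raw value.
smartctl -A /dev/sda | awk '/Wear_Leveling_Count|Total_LBAs_Written|Reallocated_Sector_Ct|Used_Rsvd_Blk_Cnt/ {print $2, $10}'
```

A rapidly climbing Wear_Leveling_Count or any nonzero Reallocated_Sector_Ct on one disk would single it out as the slow member of the array.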

Recommendation anyway: get proper enterprise SSDs with PLP (power-loss protection).
 
Since turning off retbleed didn't fix your problem, I would also suspect your disks might have some issues. Your network traffic also seems to have increased on the 16th, according to your graphs. Is this because of the updates? Are any of your VMs behaving unusually (i.e., handling more load or writing more to disk)?

Otherwise, I would also recommend the steps Neobin suggested in his posts, and vet your SSDs thoroughly. Since the SSDs you are using are not enterprise grade, one of them might be acting up. And since you are using a RAID controller, even one slow SSD can drag down the whole RAID setup.
 
Hi,

Yes, the network traffic increased because I have migrated a lot of machines off this server because of the performance issues.

I can't see that any one of the machines uses more load; disk/CPU/memory and network are all the same as before.

I have now moved all my VMs to other servers, and if I run a new benchmark now, the performance is the same as on the other servers. That is strange, right? That must mean it is in one of the VMs, or in Proxmox?
 

Attachments: benchmark2.png
I have now moved all my VMs to other servers, and if I run a new benchmark now, the performance is the same as on the other servers. That is strange, right? That must mean it is in one of the VMs, or in Proxmox?
This sounds like it could be one of the VMs, but you'd have to pin it down. Also, those Samsung QLC SSDs are consumer disks and are likely to wear out.

If you go to Datacentre > Node > Disks, you will see the wearout percentage for each SSD, provided the SSDs are not obscured behind a RAID controller.

If they are behind a RAID controller such as an HP Smart Array, the controller may show you the wearout percentage. Your best bet could be to add a third node temporarily so you can do more maintenance.

Cheers,


Tmanok
 
