Proxmox 7.1-6 poor performance

gpistotnik

Member
Nov 30, 2021
Hello!
I have Proxmox installed on a Dell 730xd with 2x Intel Xeon E5-2650 v3 and 512 GB DDR4 memory. I am looking to replace ESXi with Proxmox in the future, but I need help with performance issues. The performance of a Windows Server guest is very poor, and LXC containers are not as quick as they should be either. I had ESXi installed on this machine and it worked well; after installing Proxmox I got a huge performance drop. What have I done wrong?
 
Please provide some information about your storage:
RAID controller/disks (including model)

Output of pvesm status as well as the storage config (cat /etc/pve/storage.cfg).
In addition, please also provide the config of one Windows VM that performs badly (qm config <VMID>) and the config of a container (pct config <CTID>).
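For example (replace <VMID> and <CTID> with the actual IDs of your VM and container):

Code:
pvesm status
cat /etc/pve/storage.cfg
qm config <VMID>
pct config <CTID>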
 
For now I am using Samsung QVO 1 TB SSDs, two of them in a ZFS mirror. I will get better SSDs when I use Proxmox for real; for now it is just testing.

[Attachment: Screenshot 2021-11-30 at 12.19.00.png]

Only the QVO pool is in use: two Samsung SSDs, 1 TB each. The other disks are for the migration from ESXi.

Windows config
[Attachment: Screenshot 2021-11-30 at 12.20.01.png]
 
Consumer SSDs, and especially QLC drives like the QVO, are really terrible as ZFS storage, so don't be surprised by bad storage performance as long as you use them.

Otherwise your VM config looks fine.
 
If possible, could you run a fio test on that pool?

Code:
fio --ioengine=psync --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=600 --time_based --name write_4k --size 60G --filename=/QVO/fio_benchmark
fio --ioengine=psync --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=600 --time_based --name read_4k --size 60G --filename=/QVO/fio_benchmark
fio --ioengine=psync --direct=1 --sync=1 --rw=write --bs=4M --numjobs=1 --iodepth=1 --runtime=600 --time_based --name write_4m --size 60G --filename=/QVO/fio_benchmark
fio --ioengine=psync --direct=1 --sync=1 --rw=read --bs=4M --numjobs=1 --iodepth=1 --runtime=600 --time_based --name read_4m --size 60G --filename=/QVO/fio_benchmark
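fio does not delete the test file on its own, so once all four runs are finished you can remove it (assuming the pool is mounted at /QVO as in the commands above):

Code:
rm /QVO/fio_benchmark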
 
Consumer SSDs, and especially QLC drives like the QVO, are really terrible as ZFS storage, so don't be surprised by bad storage performance as long as you use them.

Otherwise your VM config looks fine.
So if I test this Proxmox setup with the QVO as LVM or a directory storage, should it work better?
 
If possible, could you run a fio test on that pool?

Code:
fio --ioengine=psync --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=600 --time_based --name write_4k --size 60G --filename=/QVO/fio_benchmark
fio --ioengine=psync --direct=1 --sync=1 --rw=read --bs=4K --numjobs=1 --iodepth=1 --runtime=600 --time_based --name read_4k --size 60G --filename=/QVO/fio_benchmark
fio --ioengine=psync --direct=1 --sync=1 --rw=write --bs=4M --numjobs=1 --iodepth=1 --runtime=600 --time_based --name write_4m --size 60G --filename=/QVO/fio_benchmark
fio --ioengine=psync --direct=1 --sync=1 --rw=read --bs=4M --numjobs=1 --iodepth=1 --runtime=600 --time_based --name read_4m --size 60G --filename=/QVO/fio_benchmark
It has been stuck on the first one for almost 10 hours now. I will replace the disks and then try again. Thanks for the help so far.
 
It has been stuck on the first one for almost 10 hours now. I will replace the disks and then try again. Thanks for the help so far.
Then something is really wrong, because the fio command should end after 10 minutes (--runtime=600 --time_based) even if the 60 GB haven't been read/written yet.
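If you want to see whether the disks are actually doing anything while fio runs, you could watch the pool from a second shell (pool name QVO as used above):

Code:
zpool iostat -v QVO 1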
 
Hello!
I swapped the SSDs for an NVMe one and it works perfectly.
Now I have one question: is this SSD good?
SAMSUNG PM1735 Enterprise SSD 3.2 TB internal HHHL card PCIe 4.0 x8 NVMe OEM
It would be for a ZFS mirror (I need a backup).
Or please recommend an SSD that works well with Proxmox.

Thanks
 
Looks fine, but keep in mind that a mirror never replaces a real backup, and that the SSD is an OEM part, so there is no warranty for consumers.
 
Hello!

Me again.

I am extending my SSD storage and would like to use M.2 drives, four of them in a mirror/stripe or RAIDZ1 setup (see the sketch at the end of this post).
I am thinking of
- Intel SSD 670p NVMe
- Samsung 980 SSD
- Samsung 970 EVO Plus
- Crucial P2 SSD

Are they good enough for ZFS?

Thanks
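Just to illustrate what I mean by mirror/stripe vs. RAIDZ1, roughly something like this; the pool name and device paths are only placeholders, and in practice one would rather use the /dev/disk/by-id/... paths (or the Proxmox GUI):

Code:
# striped mirror (RAID10-like): two mirror vdevs, ashift=12 for 4K sectors
zpool create -o ashift=12 tank mirror /dev/nvme0n1 /dev/nvme1n1 mirror /dev/nvme2n1 /dev/nvme3n1
# or a single RAIDZ1 vdev over all four disks
zpool create -o ashift=12 tank raidz1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1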
 
The 670p is QLC so you really want to avoid that one.
And the other three are consumer SSDs as well, so not really recommended for ZFS because of the missing power-loss protection and low life expectancy. See the Proxmox ZFS benchmark paper for enterprise vs. consumer performance comparisons: https://www.proxmox.com/de/downloads/item/proxmox-ve-zfs-benchmark-2020
Better would be something like a Micron 7300 Pro / Micron 7400 Pro / Samsung PM983 / Samsung PM9A3 (all for read-intensive workloads) or, even better, a Micron 7300 MAX (for mixed workloads). You don't have that many choices, because M.2 was designed for laptops, and servers got much better suited form factors, for example U.2, if NVMe is needed. So most enterprise SSDs use U.2/U.3 and so on, not M.2. The M.2 footprint is just too small to be useful in a big server: not enough space for spare NAND chips, not enough space for higher-grade (MLC/SLC) NAND chips, not enough space for an additional RAM cache, not much space for the capacitors of the power-loss protection (basically a built-in backup battery), not that much surface for cooling, ...

You can try it with consumer SSDs, maybe that's enough for your workload, but with SSDs it is often a case of "buy cheap and you buy twice".
 
Thanks for the reply. I will try the Crucial one, it is TLC. I don't have the option to plug in U.2/U.3 drives. I know it is not as good as real enterprise disks, but it will be enough for this use case.
 
Thanks for the reply. I will try the Crucial one, it is TLC. I don't have the option to plug in U.2/U.3 drives. I know it is not as good as real enterprise disks, but it will be enough for this use case.
Only the old ones. The new Crucial P2 SSDs use QLC too, and both have the same part number, so you don't know what you will get. You pay the same for both, but the TLC one still writes at 450 MB/s after the SLC cache is full, while the QLC one drops down to 40 MB/s. There should be some law to prevent such practices...
 
Looks like the WD Blue SN550 and SN570 are both TLC. But make sure you have a UPS and that you only use async writes. The 1 TB models are rated for just 0.329 DWPD like most other consumer SSDs, so you will run out of warranty very quickly with server workloads.
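One way to force async behavior for a ZFS dataset is to disable sync writes, at the cost of losing the last few seconds of writes on a crash or power failure (hence the UPS); the pool/dataset name here is just an example:

Code:
zfs set sync=disabled tank/vmdata
zfs get sync tank/vmdata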
 
The 1 TB SN570 is advertised with up to 3,000 MB/s write performance and 600 TB TBW. So it can write at 3 GB/s, but you are only allowed to write 600,000 GB over the 5 years of warranty. That means if the SSD could really keep up with the advertised write performance (which it won't) and you wrote nonstop at full speed, you would lose your 5-year warranty after only 55 hours of use. o_O

And keep in mind the write amplification that ZFS will cause. I have seen anything from factor 3 (big async sequential writes) to factor 81 (4K random sync writes) of total write amplification here (from the process writing in the guest down to the NAND), so the 600 TB TBW might already be exceeded after writing only 7.23 TB to 200 TB of data, depending on the workload. So it really depends on what you are trying to do with them. Maybe they will die after some months, maybe they will last some years.
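As rough arithmetic behind those numbers (taking the advertised specs at face value):

Code:
# 600 TB TBW at the advertised 3 GB/s:
#   600,000 GB / 3 GB/s = 200,000 s ≈ 55 hours of nonstop writing
# 600 TB TBW divided by the observed ZFS write amplification:
#   600 TB / 3  = 200 TB of guest data (big async sequential writes)
#   600 TB / 81 ≈ 7.4 TB of guest data (4K random sync writes)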
 
