I ran some tests on the same machine with 4 x Samsung PM1643 SAS SSDs on a PERC H730P controller in a RAID10.
The performance in my 8 benchmark VMs on the same dbench-based test is more than double what I get from the 14 x NVMe ZFS RAID10. :(
Dear Wolfgang, thanks for your help, I just wrote you a direct message. I also bought a license for this host, so we can start a more in-depth analysis and I can give you SSH access to the machine.
The linked article says that the CPU usage comes from ZFS compression. I have already disabled compression, just to be sure that CPU usage from compression is not the problem here.
Compression is off on the pool, and compression is off on each VM disk:
root@pve:~# zfs get all pve1-nvme | grep...
During the testing, after some time it started spitting the following errors on the Proxmox host:
[ 5889.944895] Uhhuh. NMI received for unknown reason 2d on CPU 16.
[ 5889.944896] Do you have a strange power saving mode enabled?
[ 5889.944897] Dazed and confused, but trying to continue
[...
I have now cloned a basic Debian 10 VM into 8 test VMs. Running a simple "dbench -s 10" on all VMs in parallel shows that each VM does not get more than 140 MB/s.
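A rough sanity check of those numbers (my assumption: per-VM throughput adds roughly linearly, ignoring host-side contention):

```shell
# 8 VMs at ~140 MB/s each; the aggregate is what the pool would need to sustain.
vms=8
per_vm_mbps=140
echo "$((vms * per_vm_mbps)) MB/s aggregate across all VMs"
# prints: 1120 MB/s aggregate across all VMs
```

That aggregate is still only about a third of what the drives deliver when benchmarked directly on the host.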
While this simple test runs, KVM processes on the host use up a high amount of CPU; maybe this is normal. But there are...
I have tested with various simple benchmarks; there is no claim to be perfect, it was just meant to give a short performance overview. We have now tried recreating the ZFS pool as a RAID60 (2 x RAIDZ2 vdevs with 7 drives each), and I know that performance is worse in RAIDZ2, but it is really, really...
I know that the strength of NVMe lies in many parallel IOs, but there should be at least more than 500 MB/s possible inside a VM on a huge 14 x P4510 RAID10, or am I wrong? Also, it should be a lot faster to clone a 750 GB VM on that array than the almost 1 hour runtime and the huge CPU load during...
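For scale, the clone figure works out to a fairly low average rate (assuming the ~1 hour is wall-clock time for the full 750 GB):

```shell
# Average throughput implied by cloning 750 GB in roughly one hour.
size_gb=750
seconds=3600
echo "~$((size_gb * 1024 / seconds)) MB/s average clone throughput"
# prints: ~213 MB/s average clone throughput
```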
What I also see is that benchmarking the disks directly on the host gives about 3-3.5 GB/s. Doing some rough benchmarks inside a VM, I only get about 450-500 MB/s. But while trying some failover and disk replacement tests, the resilvering of ZFS was running at...
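Put side by side (my assumption: taking the midpoints of the two ranges I measured), the host-to-VM gap is roughly:

```shell
# Midpoint of the host range (3-3.5 GB/s = 3250 MB/s) vs the VM range (450-500 MB/s = 475 MB/s).
host_mbps=3250
vm_mbps=475
echo "host is ~$((host_mbps / vm_mbps))x faster than a single VM"
# prints: host is ~6x faster than a single VM
```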
No problem:
- Dell PowerEdge R7515
- 1 x AMD EPYC 7302 (16 x 3.0 GHz)
- 16 x 32 GB Samsung DDR4-2933 Registered ECC memory
- 2 x 240 GB SATA SSDs (Micron 5100 Enterprise M.2 SSDs) on a Dell BOSS card (Marvell AHCI RAID1) for the Proxmox OS
- 14 x Intel P4510 2 TB NVMe SSDs directly connected to PCIe...
We are currently running tests to evaluate Proxmox as a XenServer alternative.
I have a Dell R7515 (AMD EPYC 7302P based) with 14 x Intel P4510 2 TB NVMe SSDs in a ZFS RAID10 (the OS is running on another device, so the NVMe disks are pure VM storage).
I now have a VM with a 750 GB disk on that ZFS RAID10 and...