Optimizing Proxmox

belrpr
Hi,

I am looking for ways to optimize my Proxmox / VM / LXC setup.
Currently I have Proxmox installed on 2 Samsung 840 Pro 500GB drives in a ZFS mirror.
The performance of the SSDs is OK (~100 MB/s) but nowhere near the usual Windows or ext4 levels.
Can anyone recommend SSDs for Proxmox / ZFS?

For networking I am using Open vSwitch with 4 x 1Gb NICs bonded as one.
Is this OK, or are there better solutions?
Can the LXC containers get a 10Gb network link, or should I use multiple virtual NICs and bond them together?
 
If possible, Ceph over 10Gb / 100Gb networks is better.
From my personal experience, ZFS was less successful than Ceph or even HW RAID.
 
I have ZFS here on 4 Samsung Enterprise SM863s in RAID10. Copies run at about 900 MB/s, so no problem. But yes, ZFS needs more hardware power. It also has a lot of nice extra features. If you don't need the ZFS features, use another storage model.
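For reference, a ZFS "RAID10" is just a stripe of mirrored pairs; a minimal sketch of creating one (pool name and device paths are placeholders, not my actual layout):

Code:
# create a striped-mirror ("RAID10") pool from four SSDs
# WARNING: destroys any existing data on the listed disks
zpool create tank \
  mirror /dev/disk/by-id/ata-SM863_A /dev/disk/by-id/ata-SM863_B \
  mirror /dev/disk/by-id/ata-SM863_C /dev/disk/by-id/ata-SM863_D
zpool status tank   # should show two mirror vdevs striped together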
 
Just trying to wrap my head around the fact that the Samsung 840 Pro 512GB drives are so slow under Proxmox compared to Windows.
They used to be in a hardware RAID 1 on ESXi and were blazing fast.

I ran an HD Tune test on Windows on the 256GB disk in my PC (which is slower):
https://1drv.ms/u/s!AvFk6E8b2aSsgqMMsGmL4s1pfOWOjg

Then I ran hdparm -t --direct /dev/sda and hdparm -t --direct /dev/sdj.
sda is just a SATA drive and sdj is the Samsung 840 Pro 512GB SSD.
https://1drv.ms/u/s!AvFk6E8b2aSsgqMNHfCNcMANOrloDA

Even the SATA drive is almost as fast as the SSD.
 
hdparm -t --direct /dev/sda

I just ran this exact test on a couple of my drives for you, since I have a very similar setup to yours. I am currently running a ZFS mirror with 2x 250GB Samsung 960 EVOs as my main OS, and I ran the test on an ext4-formatted external HDD (Seagate Expansion 8TB) and an 860 EVO which is currently acting as the ZIL and cache for the main ZFS mirror. Below are the results:

sdc = Seagate Expansion 8TB (ext4)
nvme0n1 = Samsung 960 EVO (ZFS mirror w/ identical drive)
sda = Samsung 860 EVO (ZFS w/ 8GB partition as ZIL, 150GB partition as ZFS cache, and the rest used as a temp directory for vzdump)
https://i.imgur.com/kOxH80F.png

I noticed a pretty big uptick in speeds when I added a ZIL and cache to my zpool, plus the writes to my NVMe drives are way lower now. I recommend giving that a shot if you haven't already.
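If you want to try it, the commands look roughly like this (pool name and partition paths are placeholders; adjust them to your own layout):

Code:
# add a small partition as a separate log device (SLOG, holds the ZIL)
zpool add rpool log /dev/disk/by-id/ata-Samsung_860_EVO-part1
# add a larger partition as an L2ARC read cache
zpool add rpool cache /dev/disk/by-id/ata-Samsung_860_EVO-part2
zpool status rpool   # new "logs" and "cache" sections should appear

Keep in mind a SLOG only accelerates synchronous writes and L2ARC only helps once RAM (ARC) is exhausted, so how much it buys you depends on the workload.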
 
Thanks for running the tests.
Going to test with an 850 EVO 250GB tomorrow to see if it is 840-related or maybe the IBM M1015 IT-mode RAID card.
 
Code:
root@fs-hv-01-17:~# hdparm -t --direct /dev/sdp
/dev/sdp:
 Timing O_DIRECT disk reads: 1510 MB in  3.00 seconds = 503.25 MB/sec
root@fs-hv-01-17:~# hdparm -t --direct /dev/sde
/dev/sde:
 Timing O_DIRECT disk reads: 540 MB in  3.00 seconds = 179.72 MB/sec
root@fs-hv-01-17:~# hdparm -t --direct /dev/sdj
/dev/sdj:
 Timing O_DIRECT disk reads: 602 MB in  3.02 seconds = 199.31 MB/sec
root@fs-hv-01-17:~# hdparm -t --direct /dev/sda
/dev/sda:
 Timing O_DIRECT disk reads: 392 MB in  3.01 seconds = 130.20 MB/sec
sdp = Samsung 850 EVO 250GB
sde & sdj = Samsung 840 Pro 512GB
sda = SATA drive
 
As a side note, not directly related to your question, I saw this the other day:

https://blogs.technet.microsoft.com/filecab/2016/11/18/dont-do-it-consumer-ssd/

I thought it was a solid breakdown of the performance issues between enterprise- and consumer-grade SSDs, and why you want the former (even though it's talking about a different storage platform). Their graphs of IOPS were telling as well, and make a good argument for something like a Samsung SM863 over the Pro line.
 
OK, but I can't find an up-to-date list of "enterprise SSDs".
I tried looking for:
Samsung SM863
and Intel 3510, but both are nowhere in stock.

Is the Intel 3520 a good option?
 
"Temporarily out of stock."

And that is in the US. I live in Europe, which would add some taxes and stuff.
Is there a current list of SSDs which are best to use with ZFS?
 
Is there a current list of SSDs which are best to use with ZFS?

From the page on Ceph:

Again, if you expect good performance, always use enterprise-class SSDs only; we have good results in our test labs with:
  • SATA SSDs:
    • Intel SSD DC S3520
    • Intel SSD DC S3610
    • Intel SSD DC S3700/S3710
    • Samsung SSD SM863
  • NVMe PCIe 3.0 x4 as journal:
    • Intel SSD DC P3700
 
The best list is still this one (it's for Ceph, but it tests raw random I/O performance):

http://www.sebastien-han.fr/blog/20...-if-your-ssd-is-suitable-as-a-journal-device/
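The test on that page boils down to 4k synchronous writes at queue depth 1; with fio it looks roughly like this (the device name is a placeholder, and writing to a raw device destroys its data):

Code:
# 4k writes with O_DIRECT + O_SYNC, single job, queue depth 1 --
# the access pattern a Ceph journal (or ZFS SLOG) actually sees
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test

Consumer SSDs often collapse to a few hundred IOPS on this test, while enterprise drives with power-loss protection stay in the tens of thousands.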

If you want to test the performance, please use the same program for Windows and Linux and for all filesystems. I can recommend fio, which runs on any Unix/Linux and on Windows as well.

You also have to compare random writes, or better a random read/write mix, to get the real performance. Enterprise SSDs are blazing fast for that type of access. You do not want to test only the sequential read capabilities of SSDs, because a disk array will outperform an SSD array with respect to cost per GB/s of throughput.
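As a starting point, a mixed random read/write run with fio could look like this (file name, size, and mix ratio are just example values):

Code:
# 70/30 random read/write mix, 4k blocks, 32 outstanding I/Os;
# targets a test file, so it is safe to run on a mounted filesystem
fio --name=randrw-test --filename=fio-testfile --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=60 --time_based \
    --group_reporting
# on Windows, use --ioengine=windowsaio instead of libaio

Run the identical command on both systems and compare IOPS and latency, not just MB/s.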
 
Devil's advocate question: why are you choosing to use ZFS at all in your Proxmox install/config? Is there a specific requirement that drives this? My experience would suggest the simpler thing to do is to avoid ZFS.

Just a two-cents sort of question/comment.

Thanks,

Tim
 
Just a two-cents sort of question/comment.

First and most important thing: software RAID (ZFS is the only option if you want software RAID that is supported by Proxmox). If you do not have a hardware RAID controller, this is the only way to go (in a fully-supported system, of course).
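In practice the "software RAID" part is handled by the same zpool tooling everywhere; for example, checking and repairing a mirror looks roughly like this (device paths are placeholders):

Code:
zpool status rpool    # shows mirror health and any degraded vdevs
# swap a failed disk for a new one; ZFS resilvers automatically
zpool replace rpool /dev/disk/by-id/ata-OLD_DISK /dev/disk/by-id/ata-NEW_DISK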

Furthermore, there is no other filesystem available in a non-clustered environment that has all of these features (a few are sketched in command form below):
* thin provisioning
* transparent compression
* COW clones
* silent data corruption prevention techniques
* easy, incremental replication that can be used to back up your data
* if you're on a beefy machine: deduplication
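For example (dataset and host names are made up):

Code:
zfs set compression=lz4 rpool/data                  # transparent compression
zfs snapshot rpool/data/vm-100-disk-0@base          # cheap point-in-time snapshot
zfs clone rpool/data/vm-100-disk-0@base rpool/data/vm-101-disk-0   # COW clone
# easy incremental replication: send only the delta between two snapshots
zfs send -i @monday rpool/data@tuesday | ssh backuphost zfs recv backup/data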

My experience would suggest the simpler thing to do is to avoid ZFS.

That's one way to go. In my experience, if you have ZFS everywhere, you are very, very flexible, and every machine can be managed with the same commands (no special hardware RAID controller management software, etc.), so using ZFS is the simpler thing to do in such a setup.

Once you've climbed the sometimes steep learning curve of ZFS, you will be rewarded and maybe changed forever :-D
 
