[SOLVED] Setup opinions

Jul 16, 2018
Hello!

The company where I work is adding two new Dell R740 servers running Proxmox to replace some old Hyper-V servers. Each server has dual Xeon Silver CPUs, 256 GB RAM, a PERC H740 RAID controller, 3× 120 GB SSDs, and 4× 4 TB HDDs.
I have set them up in a cluster: two SSDs in a physical RAID 1 for the system (with LVM), the four HDDs in a ZFS pool of striped 2-way mirrors, and the remaining SSD split between ZIL (SLOG) and L2ARC. All controller caching is disabled on the ZFS disks.

If I run pveperf I get this:
CPU BOGOMIPS: 134426.40
REGEX/SECOND: 2956506
HD SIZE: 106.98 GB (/dev/mapper/pve-root)
BUFFERED READS: 1358.36 MB/sec
AVERAGE SEEK TIME: 0.10 ms
FSYNCS/SECOND: 4840.09
DNS EXT: 56.17 ms
DNS INT: 51.92 ms
I ran these commands to set up the ZFS pool and datasets:
zpool create -f -o ashift=12 storage-pool mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde cache /dev/sdf2 log /dev/sdf1
zfs create storage-pool/vm-disks
zfs set compression=on storage-pool/vm-disks
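After creating the pool, a few read-only commands can confirm the layout and settings took effect. This is just a sketch using the pool and dataset names from the commands above:

```shell
# Sanity checks after pool creation (assumes the names used above).
zpool status storage-pool          # expect both mirror vdevs, plus log and cache, ONLINE
zpool list -v storage-pool         # per-vdev sizes and layout
zfs get compression,compressratio storage-pool/vm-disks
# Many guides suggest choosing the algorithm explicitly rather than "on":
# zfs set compression=lz4 storage-pool/vm-disks
```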
Do you think this is a good setup? Are there better alternatives with this hardware?
Any advice is welcome!

Thank you very much.

Juan.
 
Hello,

Thanks for your feedback, Guletz!

We now have both servers running Proxmox, and we are putting the finishing touches on the configuration.
Originally we had two SSDs in RAID 1 for the system and the third SSD standalone for ZFS cache. Testing from a VM with fio gave around 45K IOPS for reads and 15K IOPS for writes.

After that we tested a RAID 5 across the three SSDs for both the system and the ZFS cache, and it was a nice performance upgrade: about 60K IOPS for reads and 20K for writes.
So now both servers have one RAID 5 for the system and cache, and the four HDDs as striped 2-way mirrors for VM storage.

From what I have read online these IOPS values are OK, am I right? Or would this not be enough for multiple VMs, mainly web servers?

Thank you very much!

Juan.
 
Hello!

I used fio:
fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75

This page was my reference: binarylane.com.au/support/solutions/articles/1000055889-how-to-benchmark-disk-i-o
From what I see there, the values I got are around what should be expected.
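One cross-check on those numbers: with --rwmixread=75, roughly three quarters of the mixed workload's I/Os are reads, so read and write IOPS should split about 3:1, which matches the earlier 45K/15K result (60K total). A quick arithmetic sketch, using those figures from the thread:

```shell
# Split a mixed fio result by the rwmixread ratio.
# The 60K total is the sum of the earlier 45K read + 15K write result.
total_iops=60000
mixread=75                                  # --rwmixread=75
read_iops=$(( total_iops * mixread / 100 ))
write_iops=$(( total_iops - read_iops ))
echo "read: ${read_iops} IOPS, write: ${write_iops} IOPS"
# prints: read: 45000 IOPS, write: 15000 IOPS
```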

Thank you very much!

Juan
 
Hello,

Thanks! I ran the fio test on three CentOS VMs at the same time and the results were almost the same. I also monitored I/O usage on one VM with iotop and iostat while running fio on another VM on the same physical server, and it didn't affect it much.
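For reference, that kind of monitoring can be reproduced with the commands below. This is a sketch with commonly used flags, not the exact invocation from the test:

```shell
# Extended per-device stats every 2 seconds, throughput in MB/s:
iostat -xm 2
# Show only processes currently doing I/O, with accumulated totals:
iotop -o -a
```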

What you shared is really useful, I'm really grateful for it! Now we can keep moving forward with our setup with more confidence.

Thank you very much!

Juan
 
Hi, @jmcorrea

It looks OK. Also, thanks a lot for your feedback. This is important for people who are starting out with PMX, because they can be inspired by success stories like yours.

Good luck!
 
