looking for expert, speed up disk performance

gijsbert

Active Member
Oct 13, 2008
We run Proxmox on Supermicro systems with local storage: Intel SSDs in software RAID-10 plus a spare. I am aware that software RAID is not advised, but we just love software RAID :)

We have done several tests and we have problems with the so-called "time to first byte" (TTFB) on QEMU VMs using virtio. The average time to first byte for websites hosted on these VMs is roughly 2 seconds.

When I migrate such a VM to a dedicated machine with 2 x 7200 rpm disks in software RAID-1, the time to first byte is 1 second.

In both cases we use software RAID, and I would expect the time to first byte to be much faster on Intel SSDs than on old 7200 rpm disks, but it's not :(
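For reference, TTFB can be measured reproducibly with curl's `-w` timing variables, which also split out DNS and connect time so you can see whether the delay really happens after the request reaches the server (the URL below is a placeholder for a site served from the VM):

```shell
# Break down where the ~2 s goes: name lookup, TCP connect,
# time to first byte, and total transfer time.
curl -s -o /dev/null \
  -w "dns: %{time_namelookup}s  connect: %{time_connect}s  ttfb: %{time_starttransfer}s  total: %{time_total}s\n" \
  https://example.com/
```

If `time_starttransfer` is close to `time_connect`, the server responds quickly and the problem is elsewhere (network, DNS); if there is a large gap, the server side (application, database, or disk) is where to look.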

I'm looking for an expert who can test some of the tweaks described at:

https://pve.proxmox.com/wiki/Performance_Tweaks

I have zero experience with setting the barrier option in fstab and I'm a little afraid of screwing things up. Is anyone willing to help (commercial is not a problem), or does anyone have experience with setting up these tweaks and/or creating more tests to improve this time to first byte?
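For what it's worth, the barrier tweak from that wiki page is a single mount option on an ext3/ext4 filesystem inside the guest. A minimal, illustrative fstab line (the device and mount point are placeholders; note that disabling write barriers trades crash safety for speed and is only reasonable with a battery/flash-backed cache or an accepted risk of data loss on power failure):

```
# /etc/fstab (inside the VM) -- example only, adjust device and options.
# ext4 enables barriers by default; "barrier=0" turns them off.
/dev/vda1  /  ext4  defaults,barrier=0  0  1
```

The change takes effect after `mount -o remount /` or a reboot, and can be reverted by removing `barrier=0` again.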
 
On one node we use 960 GB Intel S3520 series drives, so with software RAID-10 and a hot spare it's a total of 5 disks per server. We don't use ZFS (yet) because we historically started with software RAID and use Puppet-based installs, and we did not find the time to migrate to another kind of storage. Instead of ZFS I would prefer Ceph, though. But the question remains: is there really that much overhead with QEMU VMs / virtio? Why is "time to first byte" 2 times faster on good old 7200 rpm disks in software RAID-1 than on Intel SSDs in software RAID-10?
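One quick way to check whether virtio itself adds noticeable I/O latency is to run the same small synchronous-write probe on the Proxmox host and inside the guest and compare the throughput `dd` reports. This is a rough sketch only (a tool like fio gives far better numbers); the file path is a placeholder and must sit on the storage you want to measure:

```shell
# Synchronous-write latency probe: each 4 KiB write must reach stable
# storage before the next one starts (oflag=dsync), so the reported
# rate is dominated by per-write latency, not bandwidth.
dd if=/dev/zero of=/tmp/latency_probe bs=4k count=1000 oflag=dsync
rm -f /tmp/latency_probe
```

If the guest result is dramatically worse than the host result on the same underlying array, the virtualization layer (cache mode, virtio settings) is worth tuning; if both are slow, the problem is below QEMU.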
 
>> Why is "time to first byte" 2 times faster on good old 7200 rpm disks in software RAID-1 than on Intel SSDs in software RAID-10?

You should profile your web application to see which functions are doing disk access. (BTW, even 1 s is quite high for a TTFB. It seems strange that it's only disk related. Are you sure that CPU or database queries are not the bottleneck?)

(PHP? If yes, have you disabled open_basedir? Enabled OPcache?)
 
>> We don't use ZFS (yet) because we historically started with software RAID and use Puppet-based installs. [...] But the question remains: is there really that much overhead with QEMU VMs / virtio?
Hi,
Puppet is nice, but try installing one PVE node with the normal ISO installer (and ZFS RAID-10) and see if the same behavior occurs.
I use Puppet too, but I do a normal install to get ZFS, LVM-thin and so on, and puppetize after that (for packages, accounts, ...).

Udo
 
