VM HDD read/write speed about 25% less than direct on-node speed

GoZippy

Active Member
Nov 27, 2020
113
2
38
45
www.gozippy.com
Was looking today and noticed significantly slower speeds on the Ubuntu guest VM for hdparm -Tt /dev/sda2 than for that same partition on the host node directly from the console.
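For reference, the comparison was along these lines (hdparm's -T number is the cached read and -t the buffered disk read, so this is a rough sequential-read check rather than a full benchmark):

```bash
# On the Proxmox node, against the physical devices:
hdparm -Tt /dev/sda2   # attached SSD partition
hdparm -Tt /dev/sdb    # attached SATA HDD

# Inside the Ubuntu guest, against its Ceph-backed virtual disk:
hdparm -Tt /dev/sda2
```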

This is directly on the Proxmox node, reading the attached SSD /dev/sda2:
[screenshot: hdparm -Tt results for /dev/sda2 on the node]

This is directly on the Proxmox node, reading the attached SATA HDD /dev/sdb:
[screenshot: hdparm -Tt results for /dev/sdb on the node]




This is on the Ubuntu guest VM on that same node, installed to a Ceph-backed disk:
[screenshots: repeated hdparm -Tt runs inside the guest VM]

Funny how the buffered read keeps getting better ...

Anyhow - the performance of the guest VM reading from that same HDD seems off... maybe it's because the host node is reading directly from the attached HDD while the VM is going through the Ceph/OSD layer?

Seems like a significant reduction though...

Any places I can check to learn more and figure out how to speed up access a little for the VMs?

Ideas?

Using the entire 1TB drive on the host as an OSD for CephPool1
Created the VM using CephPool1 for the HDD - VirtIO SCSI - default, no cache on the VM HDD setup.
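For context, that setup corresponds roughly to the following CLI calls (VM ID 100 and the volume name are placeholders, not taken from the post):

```bash
# Hypothetical VM ID and volume name - adjust to your VM.
# Default VirtIO SCSI controller plus a CephPool1-backed disk with no cache option set:
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 CephPool1:vm-100-disk-0
```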
 
Testing further - changed the guest HDD to enable "SSD emulation"
[screenshot: VM disk options with SSD emulation enabled]

and it seems to have increased performance a bit... from 7900-8200 to now 8200-8650 MB/s, which is a 3 to 5% improvement, but not anywhere close to the direct access on the Proxmox node at around 11,500 MB/s... So still about 25% less disk performance on the VM than on the node directly...

[screenshots: repeated hdparm -Tt runs inside the guest after enabling SSD emulation]
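The same SSD emulation toggle can be applied from the CLI instead of the GUI (VM ID and volume name are placeholders):

```bash
# Hypothetical VM ID and volume name - present the virtual disk to the guest as an SSD
qm set 100 --scsi0 CephPool1:vm-100-disk-0,ssd=1
```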




Any other tweaks I can do to boost speeds without too much craziness?
 
yes - I put that in the info above...

Using the entire 1TB HDD on the host as an OSD for CephPool1 (9 other nodes with 1TB drives as Ceph OSDs, and 1 machine with 8 more drives all set up as OSDs)...

Created VM using CephPool1 for HDD - VirtIO SCSI - default, no cache on the VM HDD setup.

wondering if I can improve performance a little at the Ceph pool layer, and whether I can get it to use the local host for faster direct access on Ceph

When I changed the VM settings for the virtual HDD to enable SSD emulation, it helped - about 5% faster - but it is still far off from where it should be... guessing there is some issue with the Ceph performance settings on that local node, or maybe I can change something about the virtual HDD setup with cache or IO thread settings...
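One way to tell whether the bottleneck is the pool itself or the VM disk layer is to benchmark the pool directly from a node; a rough sketch (pool name from above, the 30-second runtime is arbitrary):

```bash
# Write benchmark against the pool; keep the objects so the read test has data
rados bench -p CephPool1 30 write --no-cleanup
# Sequential read benchmark against those objects
rados bench -p CephPool1 30 seq
# Remove the benchmark objects afterwards
rados cleanup -p CephPool1
```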

sorry - still learning...
 
If you change the SCSI Controller to "VirtIO SCSI single" and enable IO Thread on your VM disks, it can improve IO performance when you have multiple virtual disks per VM.
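A sketch of those two changes from the CLI (VM ID and volume name are placeholders; the same settings are in the GUI under the VM's Hardware tab):

```bash
# Per-VM: switch to VirtIO SCSI single and give the disk its own IO thread
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 CephPool1:vm-100-disk-0,iothread=1
```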

On top of Ceph, consider using cache=writeback to help with performance. According to Proxmox's benchmarks it can drastically improve write performance. In your Proxmox storage config, is "KRBD" enabled on your Ceph storage? That can reduce CPU load and thus improve your performance as well.
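Roughly what those look like on the CLI - the storage ID is assumed to be CephPool1 here, check yours with pvesm status:

```bash
# Per-VM disk: enable writeback caching (disk options like iothread, ssd and cache
# can all be combined on a single --scsi0 line)
qm set 100 --scsi0 CephPool1:vm-100-disk-0,cache=writeback,iothread=1,ssd=1
# Storage-wide: enable KRBD so the kernel RBD client is used instead of librbd
pvesm set CephPool1 --krbd 1
```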

Do I understand this correctly - you have 9 nodes with 1 OSD per node and one node with 8 OSDs?
That sounds terribly unbalanced and will most probably not survive a host failure of the 8-OSD node.
 
