Abysmal mapped disk performance (there's more to it...)

cvega

Member
Oct 30, 2019
Hiya folks,

a bit new to Proxmox (loving it compared to ESX), but I've hit a snag.

Here's my config; yes, it's a ghetto old PC that I've converted into a server.
i7-3770 on an Asus P8Z68-V LX board
32 GB DDR3
Adaptec 5805Z (no battery) with RAID BIOS (I know... waiting on a decent IT-mode card)

Drives connected to the motherboard SATA ports:
120 GB SSD (OS)
500 GB SSD (some Windows VMs)
750 GB WD

3 x 1 TB WD Caviar for "critical" VMs (in a ZFS mirror). This is where my Windows Server, ZoneMinder, and two Linux VMs live.

Furthermore, I have four HDDs connected as JBOD to the Adaptec card:

2 x Hitachi 1 TB
2 x Seagate 1 TB

The above 1 TB drives all carry Linux MD RAID1 arrays (two disks per array) from previous machines. I've passed them through to one of the Linux VMs using "qm set xxx -virtio2 /dev/disk/by-id/xxx1" etc. for all four disks; they all show up in the VM correctly.
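For clarity, the pattern looks like this (the VM ID and the disk IDs below are placeholders, not my actual serials):

```shell
# Attach each member disk of the old MD RAID1 arrays to the VM as a
# virtio block device. 100 is a placeholder VM ID; the by-id names are
# placeholders for the real drive serials.
qm set 100 -virtio2 /dev/disk/by-id/ata-Hitachi_PLACEHOLDER_1
qm set 100 -virtio3 /dev/disk/by-id/ata-Hitachi_PLACEHOLDER_2
qm set 100 -virtio4 /dev/disk/by-id/ata-Seagate_PLACEHOLDER_3
qm set 100 -virtio5 /dev/disk/by-id/ata-Seagate_PLACEHOLDER_4
```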

[screenshot: disks_vm_2.PNG]

However, read speeds from those arrays are terrible. I didn't have this issue in ESX. What's gone wrong? Any other tests I could try?

The VM has its root drive on the 3 x 1 TB array; here are its speed results:

root@hulk:~# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync && dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.38018 s, 168 MB/s
1000+0 records in
1000+0 records out
512000 bytes (512 kB, 500 KiB) copied, 29.1812 s, 17.5 kB/s

Not perfect, but I can live with that. However, when running the same test on the passed-through drives, it's quite a lot worse (/storage/misc is on one of the RAID1 arrays):
root@hulk:/storage/misc# dd if=/dev/zero of=test3.img bs=1G count=1 oflag=dsync && dd if=/dev/zero of=test4.img bs=512 count=1000 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 36.1309 s, 29.7 MB/s
1000+0 records in
1000+0 records out
512000 bytes (512 kB, 500 KiB) copied, 55.739 s, 9.2 kB/s
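In case it helps, I could also run fio for a more controlled comparison than dd; something like this (path is the same test mount, parameters are just a sequential-write sketch):

```shell
# Sequential 1 MiB writes, O_DIRECT to bypass the page cache,
# against the passed-through RAID1 mount.
fio --name=seqwrite \
    --filename=/storage/misc/fio.test \
    --rw=write --bs=1M --size=1G \
    --direct=1 --numjobs=1 --ioengine=libaio
```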
 
However, read speeds from those arrays are terrible. I didn't have this issue in ESX. What's gone wrong? Any other tests I could try?
Passthrough on ESX works differently. Here the disks are not actually passed through: QEMU just opens the block device at that path and presents a virtual disk to the guest, so all I/O still goes through the QEMU storage layer. For true passthrough, the whole disk controller would need to be passed through to the VM.
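If the hardware supports it (IOMMU/VT-d enabled in the BIOS and kernel), passing the whole controller through would look roughly like this; the PCI address and VM ID below are placeholders:

```shell
# Find the Adaptec controller's PCI address on the host.
lspci | grep -i adaptec

# Pass the entire controller to the VM (address and VM ID are
# placeholders); the guest then talks to the disks directly.
qm set 100 -hostpci0 0000:03:00.0
```

Note that once the controller is passed through, the host can no longer see any of the disks attached to it.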
 
