Hi,
I'm evaluating Proxmox 5 on a test server, and storage performance inside the guest seems very poor compared to the host. The root cause appears to be zvol performance.
In the past week I've read various threads started by people with similar problems, but none of their solutions worked for me.
I could go the LVM way, but async replication and live migration with local storage look like good features.
Maybe you can help.
Write performance: about 10% of the host's
Read performance: about 30% of the host's
Details:
Host:
HP DL380 G6, 48 GB RAM ECC, P410i raid controller
HDDs: 4 x 146GB SAS @ 10k rpm
rpool is raidz1-0 with 4 disks.
os: Proxmox 5
Guest:
os: debian9
ram: 4G
virtualization: kvm
disk: scsi or virtio; cache: none / writeback (tried multiple options; performance is not much different)
disk size: 32 GB / 120 GB (multiple tests, doesn't seem to matter)
fs: ext4, 4k block size
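For reference, the cache mode can be switched on an existing disk from the Proxmox CLI; a minimal sketch, assuming VM ID 100 and a typical local-ZFS volume name (check yours with `qm config`):

```shell
# VM ID (100) and volume name are assumptions -- substitute your own.
# Show the current disk line for the VM:
qm config 100 | grep scsi0
# Re-attach the same volume with a different cache mode (e.g. writeback):
qm set 100 --scsi0 local-zfs:vm-100-disk-1,cache=writeback
```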
Tests:
A. Host/write:
root@dnm:/rpool/ROOT/pve-1# dd if=/dev/urandom of=sample.txt bs=64M count=1562 iflag=fullblock
1562+0 records in
1562+0 records out
104824045568 bytes (105 GB, 98 GiB) copied, 575.276 s, 182 MB/s
B. Host/read:
root@dnm:/rpool/ROOT/pve-1# dd if=sample.txt of=/dev/zero bs=64M count=1562 iflag=fullblock
1562+0 records in
1562+0 records out
104824045568 bytes (105 GB, 98 GiB) copied, 446.276 s, 235 MB/s
C. Guest/write:
root@debian:~# dd if=/dev/urandom of=sample.txt bs=64M count=1562 iflag=fullblock
1562+0 records in
1562+0 records out
104824045568 bytes (105 GB, 98 GiB) copied, 5275.91 s, 19.9 MB/s
D. Guest/read:
root@debian:~# dd if=sample.txt of=/dev/zero bs=64M count=1562 iflag=fullblock
1562+0 records in
1562+0 records out
104824045568 bytes (105 GB, 98 GiB) copied, 1455.35 s, 72.0 MB/s
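One caveat with the tests above: /dev/urandom can cap throughput at the speed of the kernel RNG rather than the disk, so the write figures may partly measure CPU. A minimal source-independent sketch (file path and sizes are placeholders; for a real run the file should exceed RAM/ARC, and note that ZFS compression will inflate numbers for all-zero data):

```shell
#!/bin/sh
# Sketch: sequential write/read test without /dev/urandom as the source,
# since the kernel RNG can bottleneck dd regardless of disk speed.
TESTFILE=./ddtest.bin

# Write test: /dev/zero costs almost nothing to read, so the disk (or the
# storage stack) is the bottleneck. conv=fsync forces data to stable storage
# before dd reports a rate. Sizes here are small for illustration only.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync

# Read test: drop the page cache first so dd hits the disk, not RAM
# (requires root; skipping it means you benchmark the cache instead):
# echo 3 > /proc/sys/vm/drop_caches
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```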
Tried with various bs and file sizes.
Tried with zvol volblocksize=4k (to match the ext4 block size).
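Worth double-checking: volblocksize is fixed when a zvol is created, so changing the storage default only affects newly created disks. A sketch for inspecting the relevant properties (dataset names are assumptions; list yours with `zfs list -t volume`):

```shell
# Dataset name below is an assumption -- substitute your actual zvol.
# volblocksize cannot be changed after creation, only set on new zvols:
zfs get volblocksize,compression rpool/data/vm-100-disk-1
# On a 4-disk raidz1, very small volblocksize values can waste a large
# fraction of space to parity/padding; the pool's ashift matters here:
zdb -C rpool | grep ashift
```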
Added log and cache SSDs (cheaper Intel ones, I know). Performance seems worse.
Thanks,
Bogdan.