OpenVZ Ploop support

Hi,

What is the status of OpenVZ Ploop support under Proxmox? I wanted to test it out and it looks like it's not fully supported at this stage.

Code:
# vzctl create 999 --ostemplate centos-6-x86_64 --layout ploop --diskspace 10G
Warning: ploop support is not compiled in
Creation of container private area failed

I understand the implications of using unstable software. I'm just interested in getting it going for the purpose of learning more about it at this stage, so it can be used when it's stabilised eventually.
 
Are you able to give further info on what needs to be compiled? I've already compiled and installed the ploop and ploop-lib packages prior to testing the above.
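From what I can tell, getting past that warning would mean rebuilding vzctl itself against the ploop libraries, roughly along these lines (just a sketch, assuming the upstream vzctl sources with their standard autotools build; the source path is illustrative):

Code:
# the warning means the vzctl binary itself was built without ploop support,
# so the ploop/ploop-lib packages alone are not enough -- vzctl has to be
# rebuilt while the ploop development headers (ploop/libploop.h) are installed
cd /usr/src/vzctl-4.0          # illustrative path to the vzctl sources
./configure --prefix=/usr      # should now detect the installed ploop headers
make
make install
# afterwards the original command should get past the warning:
vzctl create 999 --ostemplate centos-6-x86_64 --layout ploop --diskspace 10G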
 
Hmm, looks like it'll be a bit more troublesome than I first thought. Any chance of it being added to the official Proxmox vzctl release in the near future?
 
Hi Dietmar,

Three months on, is there any news about the ploop implementation?
I think everybody with a fast SAN configuration really wants this feature, to get speed and HA with containers.


BTW: Maybe it would be a nice idea to set up a benchmark section inside this forum, so one could compare their values with similar installations.


Greetings from Solingen
Udo
 
Hi Dietmar,

would it be possible to get ploop into the pvetest repo?

I ask because I did some performance tests with OpenVZ inside a KVM guest: one test with CentOS 6.3 and the latest OpenVZ repo/kernel 2.6.32-042stab072.10 with ploop, and a second test with Proxmox VE 2.2 and standard simfs, also inside a KVM guest. Both instances use VirtIO drivers. Storage for both tests is an HP P2000 (SAS) with a bunch of RAID 10 spindles. Performance with OpenVZ and ploop was at least 50% better! So it would be nice to have ploop in the pvetest repo.

P.S.: Over the last weekend I used fakeroot/alien to convert some RPMs to DEBs to test with PVE, but ran into problems with vzctl.
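Roughly, such a conversion would look like this (the package file names are placeholders for the actual OpenVZ RPMs):

Code:
# convert the upstream OpenVZ RPMs into Debian packages, keeping version and maintainer scripts
fakeroot alien --to-deb --scripts --keep-version ploop-lib-1.5-1.x86_64.rpm
fakeroot alien --to-deb --scripts --keep-version ploop-1.5-1.x86_64.rpm
fakeroot alien --to-deb --scripts --keep-version vzctl-4.0-1.x86_64.rpm
dpkg -i ploop-lib_*.deb ploop_*.deb vzctl_*.deb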

Greetings
Udo
 
Please post details about your benchmark setup and your results here.
 
Hi Tom,

I did a new test setup to get fair results. So here we go:

OpenVZ in KVM - Performance Test


1. Proxmox VE 2.2 - Kernel 2.6.32-16-pve
2. CentOS 6.3 - Kernel 2.6.32-042stab072.10


Both VMs:
CPUs: 4 (2 sockets, 2 cores)
8 GB RAM
64 GB disk (VirtIO) on HP P2000 (SAS)




OpenVZ/simfs, 8 GB disk, 2 GB RAM (256 MB swap), Debian 6 on PVE (1)
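For reference, containers like these would be created roughly as follows (a sketch using vzctl 4.x syntax; the CTID, template name and exact VSwap settings are assumptions, not the actual commands used):

Code:
# simfs container on the Proxmox VE host (simfs is the default layout there)
vzctl create 100 --ostemplate debian-6.0-standard_6.0-4_amd64 --diskspace 8G
vzctl set 100 --ram 2G --swap 256M --save
# the ploop container on the CentOS 6.3 host differs only in the layout:
#   vzctl create 100 --ostemplate ... --layout ploop --diskspace 8G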


ioping -c 10 -s 1M /tmp
--- /tmp (simfs /var/lib/vz/private/100) ioping statistics ---
10 requests completed in 9035.0 ms, 354 iops, 353.6 mb/s
min/avg/max/mdev = 2.5/2.8/3.2/0.2 ms
---
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.80018 s, 283 MB/s
---


Bonnie++
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
test 2G:64k 433 99 291120 99 354716 61 2557 99 2934943 100 8731 306
Latency 168ms 110ms 4418us 29814us 427us 25175us
Version 1.96 ------Sequential Create------ --------Random Create--------
test -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 537 9 +++++ +++ 586 8 489 9 +++++ +++ 589 8
Latency 29137us 476us 36149us 37879us 64us 26675us








OpenVZ/ploop, 8 GB disk, 2 GB RAM (256 MB swap), Debian 6 on CentOS 6.3 (2)


ioping -c 10 -s 1M /tmp
--- /tmp (ext4 /dev/ploop35697p1) ioping statistics ---
10 requests completed in 9020.9 ms, 611 iops, 610.6 mb/s
min/avg/max/mdev = 1.5/1.6/1.9/0.1 ms
---

dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.74762 s, 287 MB/s
---


Bonnie++
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size:chnk K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
ovz1 2G:64k 320 80 252414 40 357251 47 1567 81 2045109 76 4525 114
Latency 223ms 20164us 20416us 54030us 22472us 41560us
Version 1.96 ------Sequential Create------ --------Random Create--------
ovz1 -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 388 5 +++++ +++ 324 4 400 5 +++++ +++ 318 4
Latency 34433us 430us 35490us 30900us 187us 62252us



Results:
Under these new conditions we got good results in both containers. With ioping we see far better results with ploop.


Best regards
Udo
 
Because I may have fooled myself with ploop and the ioping results, I did another benchmark. This time I used dbench 4.0:

OpenVZ/simfs, 8 GB disk, 2 GB RAM (256 MB swap), Debian 6 on PVE (1)
# test
# dbench version 4.00 - Copyright Andrew Tridgell 1999-2004
# dbench -D /tmp 100


Operation Count AvgLat MaxLat
----------------------------------------
NTCreateX 6133769 0.152 387.575
Close 4504046 0.009 159.839
Rename 259963 1.790 368.876
Unlink 1239710 1.726 548.317
Qpathinfo 5561228 0.068 391.474
Qfileinfo 969418 0.005 97.485
Qfsinfo 1019738 0.018 193.475
Sfileinfo 500363 3.044 392.061
Find 2149242 0.092 484.297
WriteX 3029974 1.238 527.861
ReadX 9620709 0.068 479.666
LockX 19964 0.006 9.173
UnlockX 19964 0.003 7.890
Flush 430382 114.964 887.775


Throughput 318.222 MB/sec 100 clients 100 procs max_latency=887.779 ms






OpenVZ/ploop, 8 GB disk, 2 GB RAM (256 MB swap), Debian 6 on CentOS 6.3 (2)
# ovz1
# dbench version 4.00 - Copyright Andrew Tridgell 1999-2004
# dbench -D /tmp 100


Operation Count AvgLat MaxLat
----------------------------------------
NTCreateX 4600440 1.555 667.315
Close 3377087 0.014 185.559
Rename 195074 9.302 458.379
Unlink 930834 13.186 661.092
Qpathinfo 4172345 1.086 533.454
Qfileinfo 727292 0.008 148.056
Qfsinfo 765739 0.025 121.546
Sfileinfo 374778 10.617 398.732
Find 1613177 0.418 316.398
WriteX 2273291 7.589 1006.579
ReadX 7222195 0.583 624.513
LockX 15016 0.017 25.526
UnlockX 15016 0.013 38.758
Flush 322544 22.953 1034.889


Throughput 239.02 MB/sec 100 clients 100 procs max_latency=1034.898 ms


With dbench we get better results with simfs!

Best Regards
Udo
 
I noticed that you performed the ioping tests in /tmp. Maybe /tmp is mounted as tmpfs (i.e. it resides in RAM) in that container? That would be the only explanation I can offer for these bogus test results, since >300 MB/s seems unreal, even for non-RAID0 SAS disks.
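A quick way to rule that out is to check what /tmp is actually backed by inside the container, for example:

Code:
# "tmpfs" in the Type column would mean the test was effectively running in RAM
df -hT /tmp
# or check the mount table directly
grep ' /tmp ' /proc/mounts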

Also, OpenVZ is basically a (very) glorified BSD jail, and simfs isn't actually a filesystem (it only simulates a superblock so you can have user quotas inside the containers), so you should always see 100% native I/O performance. Ploop adds a layer on top of that, so it is even theoretically impossible for it to be faster (unless it silently used compression to mess with your testing).

Anyhow, thanks for posting benchmarks. I think more extensive benchmarks would be worthwhile once ploop can use qcow2 images (it's planned from what I can tell).
 
I noticed that you performed the ioping tests in /tmp. Maybe /tmp is mounted as tmpfs (i.e. it resides in RAM) in that container? That would be the only explanation I can offer for these bogus test results, since >300 MB/s seems unreal, even for non-RAID0 SAS disks.

Thanks, mo, for your reply. So I did another ioping benchmark in /home:

root@ovz1:/# ioping -c 10 -s 1M /home
1048576 bytes from /home (ext4 /dev/ploop35697p1): request=1 time=1.7 ms
1048576 bytes from /home (ext4 /dev/ploop35697p1): request=2 time=2.0 ms
1048576 bytes from /home (ext4 /dev/ploop35697p1): request=3 time=2.2 ms
1048576 bytes from /home (ext4 /dev/ploop35697p1): request=4 time=2.1 ms
1048576 bytes from /home (ext4 /dev/ploop35697p1): request=5 time=1.5 ms
1048576 bytes from /home (ext4 /dev/ploop35697p1): request=6 time=1.5 ms
1048576 bytes from /home (ext4 /dev/ploop35697p1): request=7 time=1.5 ms
1048576 bytes from /home (ext4 /dev/ploop35697p1): request=8 time=1.5 ms
1048576 bytes from /home (ext4 /dev/ploop35697p1): request=9 time=1.4 ms
1048576 bytes from /home (ext4 /dev/ploop35697p1): request=10 time=1.5 ms


--- /home (ext4 /dev/ploop35697p1) ioping statistics ---
10 requests completed in 9020.2 ms, 590 iops, 589.9 mb/s
min/avg/max/mdev = 1.4/1.7/2.2/0.3 ms


Greetings
Udo
 
Are you sure that your data actually reaches the disk? That it is not just looping in RAM?

This is from an SSD (Intel SSD 330):
--- /home/mir (ext4 /dev/disk/by-uuid/aee6bc10-58e7-4966-a6c1-f1822dcee938) ioping statistics ---
10 requests completed in 9054.6 ms, 193 iops, 193.3 mb/s
min/avg/max/mdev = 4.6/5.2/5.8/0.4 ms
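One way to take the page cache out of the picture is to force direct I/O in the benchmarks, for example (paths are illustrative):

Code:
# ioping -D opens the target with O_DIRECT, bypassing the page cache
ioping -D -c 10 -s 1M /home
# dd writing with O_DIRECT for comparison
dd if=/dev/zero of=/home/testfile bs=64k count=16k oflag=direct
# drop the caches on the host between runs so reads cannot come from RAM
sync && echo 3 > /proc/sys/vm/drop_caches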
 
Are you sure that your data actually reaches the disk? That it is not just looping in RAM?

No, mir, I'm not sure about the ioping results!

That's why I did more benchmarks with dd, dbench and bonnie++.
I also explained the setup in the first tests. And since mo commented that I could have used a tmpfs within /tmp, I repeated the tests again, but this time in /home. The results are the same.

Best Regards
Udo
 
