Questions about LVM / MDRAID

twister988

New Member
Feb 10, 2012
Hello everyone.

First, I want to introduce myself.
My name is Ralph, I am 23 years old and I live in Kerpen (near Cologne) in Germany.

I have been using Proxmox at home since version 1.6 to learn about hardware virtualization.

I began with a small 2-core system: 3GB RAM, a 120GB HDD for the Proxmox system and the vmdks, and a 1.5TB HDD for storage within a VM (LVM).

Now I am on an Intel Core i5 2500K with 16GB RAM, a 120GB HDD for the Proxmox system, the 1.5TB HDD for storage (as written before) and 2x 500GB Samsung SATA-II HDDs in mdraid0 for the vmdks. Proxmox is NOT installed on the MDRAID0!

I have set up a number of VMs and they are running quite fine.
But: I created an extra 200GB virtual HDD for my backup server (a cache before writing to tape), and I see very poor performance when formatting this "cache HDD" within the VM (mkfs.ext4 /dev/vdb1): it has been running for over 40 minutes now and is still writing inode tables (1300/1600).

Is that poor performance caused by the MDRAID0?
Is there a way to improve performance?
-> Maybe LVM striping - is that supported?
-> Maybe cache=no in the VM.conf?
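(With cache=no I mean the KVM disk cache mode; in plain KVM terms it would be something like the following - the file name is just an example:)

Code:
kvm -drive file=/var/lib/vz/images/101/vm-101-disk-1.raw,if=virtio,cache=none ...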

Are there any pros/cons of LVM compared to MDRAID, apart from the simpler storage management in LVM (which alone would be a big reason to switch from mdraid to lvm...)?

Or are there any other suggestions, apart from buying more HDDs?

PS:
When the VMs are not under load, pveperf gets about 260 MB/s on the mdraid0.

Thanks in advance for your answers!

Kind regards,
Ralph!
 
Hi Ralph,
IO is often the bottleneck in virtualisation. I have some experience with software RAID, but not on PVE.
The PVE staff don't support software RAID and I guess I know why - but on the other hand, both software and hardware RAID have their pros and cons. I have also seen hardware RAIDs that are (much) slower than software RAIDs (LSI on Sun hardware - crappy things).

You use your disks as a stripe? Be aware that most data is important enough to be protected against single-disk failures (this also speaks against LVM striping).
A 500GB Samsung? That does not sound like a very fast disk - perhaps that is one bottleneck?
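If the data is important to you, a mirror would be the safer setup - e.g. with mdadm (your device names, untested sketch):

Code:
# raid1 mirror instead of raid0 stripe: survives a single disk failure
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd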

Is there other IO on the device?
What does pveperf show on the md device (if you create an LV with a filesystem and mount it on the host)?
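I mean something like this (VG/LV names and the mount point are just placeholders):

Code:
# create a small test LV, put a filesystem on it and mount it on the host
lvcreate -L 10G -n perftest yourvg
mkfs.ext3 /dev/yourvg/perftest
mkdir -p /mnt/perftest
mount /dev/yourvg/perftest /mnt/perftest
pveperf /mnt/perftest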

BTW, regarding the backup client and the cache disk: backup needs a lot of IO, which is not KVM's strength (it needs CPU power too).
I have a backup server running on OpenVZ which is fast (it can feed the LTO-4 drives with 100 MB/s from the cache disks). With KVM the same is not so easy!

Udo
 
Hi Udo, and thanks for your answer.

Now the quote-mania begins :D

Hi Ralph,
IO is often the bottleneck in virtualisation. I have some experience with software RAID, but not on PVE.
The PVE staff don't support software RAID and I guess I know why - but on the other hand, both software and hardware RAID have their pros and cons. I have also seen hardware RAIDs that are (much) slower than software RAIDs (LSI on Sun hardware - crappy things).

So there is no difference between LVM and MDRAID? I guess both are "software RAID" for you, right?

You use your disks as a stripe? Be aware that most data is important enough to be protected against single-disk failures (this also speaks against LVM striping).
A 500GB Samsung? That does not sound like a very fast disk - perhaps that is one bottleneck?

Is there other IO on the device?
What does pveperf show on the md device (if you create an LV with a filesystem and mount it on the host)?

I wouldn't say it is slow.
pveperf on md0, where the vmdks also live:
Code:
proxmox:~# pveperf /dev/md0
CPU BOGOMIPS:      26476.90
REGEX/SECOND:      1298876
HD SIZE:           916.90 GB (/dev/md0)
BUFFERED READS:    251.20 MB/sec
AVERAGE SEEK TIME: 14.22 ms

hdparm on the single disks:
Code:
proxmox:~# hdparm -tT /dev/sdc

/dev/sdc:
 Timing cached reads:   21980 MB in  2.00 seconds = 11000.98 MB/sec
 Timing buffered disk reads:  404 MB in  3.01 seconds = 134.23 MB/sec
proxmox:~# hdparm -tT /dev/sdd

/dev/sdd:
 Timing cached reads:   21794 MB in  2.00 seconds = 10907.67 MB/sec
 Timing buffered disk reads:  402 MB in  3.01 seconds = 133.41 MB/sec

dd runs on md0:

Code:
proxmox:~# time dd if=/dev/zero of=/mnt/proxmox-raid0-vms/dd.img bs=1M count=2k conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 10.5349 s, 204 MB/s

real    0m10.783s
user    0m0.002s
sys     0m1.881s

proxmox:~# time dd if=/dev/zero of=/mnt/proxmox-raid0-vms/dd.img bs=1k count=2M conv=fdatasync
2097152+0 records in
2097152+0 records out
2147483648 bytes (2.1 GB) copied, 11.4826 s, 187 MB/s

real    0m11.734s
user    0m0.161s
sys     0m3.689s

BTW, regarding the backup client and the cache disk: backup needs a lot of IO, which is not KVM's strength (it needs CPU power too).
I have a backup server running on OpenVZ which is fast (it can feed the LTO-4 drives with 100 MB/s from the cache disks). With KVM the same is not so easy!

Well, I just have an old LTO-1 drive on a Wide-SCSI controller... I think KVM can handle speeds of around 15-20 MB/s, no?

Kind regards,

Ralph
 
Hi Udo, and thanks for your answer.

Now the quote-mania begins :D
Right ;) - but I shortened some things.
So there is no difference between LVM and MDRAID? I guess both are "software RAID" for you, right?
Of course there are differences, but software RAID-0 is comparable to LVM striping.
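For example, these two give you roughly the same striped layout (names and sizes are only examples):

Code:
# software raid0 over two disks
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
# versus striping directly in lvm
vgcreate vg0 /dev/sdc /dev/sdd
lvcreate -i2 -I64 -L 100G -n striped vg0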
I wouldn't say it is slow.
pveperf on md0, where the vmdks also live:
Code:
proxmox:~# pveperf /dev/md0
CPU BOGOMIPS:      26476.90
REGEX/SECOND:      1298876
HD SIZE:           916.90 GB (/dev/md0)
BUFFERED READS:    251.20 MB/sec
AVERAGE SEEK TIME: 14.22 ms
Right, that looks good, but what about the fsyncs/s? And the average seek time is a little bit high.
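You can also get a rough idea without pveperf, with dd in sync mode (the path is only an example):

Code:
# each 4k block is written synchronously, so this approximates an fsync-per-write workload
dd if=/dev/zero of=/mnt/proxmox-raid0-vms/synctest bs=4k count=1000 oflag=dsync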
...
Well, I just have an old LTO-1 drive on a Wide-SCSI controller... I think KVM can handle speeds of around 15-20 MB/s, no?
Right - 20 MB/s is no problem. OpenVZ has the advantage of a smaller footprint, but with KVM you are more flexible...

Udo
 
Right, that looks good, but what about the fsyncs/s? And the average seek time is a little bit high.

That's a good question.
After the seek time, pveperf aborts with a syntax error on line 86, IIRC.
What does that mean?

I am on mobile now, so I cannot check...

Ralph
 
Hi again.
I tried pveperf once more, after I read that pveperf needs a mounted filesystem to work correctly.
Information: md0 had an ext4 filesystem... I read somewhere that ext4 has bad fsync performance compared to ext3.

Code:
proxmox:~# pveperf /mnt/proxmox-raid0-vms
CPU BOGOMIPS:      26476.20
REGEX/SECOND:      1339785
HD SIZE:           916.90 GB (/dev/md0)
BUFFERED READS:    223.81 MB/sec
AVERAGE SEEK TIME: 16.41 ms
FSYNCS/SECOND:     51.98
DNS EXT:           82.33 ms
DNS INT:           0.41 ms (xxxxx.local)

The FSYNCS were very, very slow IMHO. I guess that was the cause of my low (freezy) performance.
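(I later read that this might simply have been ext4's write barriers, which are on by default, while ext3 had them off; a quick counter-test would have been to remount without barriers - untested on my side:)

Code:
mount -o remount,barrier=0 /mnt/proxmox-raid0-vms
pveperf /mnt/proxmox-raid0-vms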
So after that result I decided to switch from mdadm to LVM2.
I stopped all VMs, backed up the vmdks, stopped and deleted the array, deleted the partitions via fdisk, and uninstalled mdadm.
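From memory, the teardown was roughly this (I did not save the exact commands):

Code:
mdadm --stop /dev/md0                      # stop the running array
mdadm --zero-superblock /dev/sdc /dev/sdd  # wipe the md metadata from both disks
apt-get remove mdadm                       # uninstall mdadm

Then I did the following to create a striped volume group with two logical volumes over the two HDDs: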

Code:
# initialize both disks as physical volumes and build the volume group
pvcreate /dev/sdc /dev/sdd
vgcreate lvm_stripe /dev/sdc /dev/sdd
# two striped LVs (-i2 = two stripes, -I4 = 4k stripe size)
lvcreate -i2 -I4 --size 700G -n virtual_disks lvm_stripe /dev/sdc /dev/sdd
lvcreate -i2 -I4 --size 200G -n backup lvm_stripe /dev/sdc /dev/sdd
mkfs.ext3 /dev/lvm_stripe/virtual_disks
mkfs.ext3 /dev/lvm_stripe/backup
# no reserved blocks needed on pure data volumes
tune2fs -r 0 /dev/lvm_stripe/virtual_disks
tune2fs -r 0 /dev/lvm_stripe/backup
mkdir -p /mnt/proxmox-lvm_stripe-vms /mnt/proxmox-lvm_stripe-backup
echo "/dev/lvm_stripe/virtual_disks /mnt/proxmox-lvm_stripe-vms ext3 defaults 0 1" >> /etc/fstab
echo "/dev/lvm_stripe/backup /mnt/proxmox-lvm_stripe-backup ext3 defaults 0 1" >> /etc/fstab
mount -a

Two stripes (-i2), one per disk.
A stripe size (-I4) of 4k is good, I think.
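To check that the striping really came out as intended, lvs can show the segment layout (it should report 2 stripes per LV):

Code:
proxmox:~# lvs --segments -o +devices lvm_stripe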

Summary:
- created a volume group via LVM
- created two logical volumes, striped over the two identical hard disks
- filesystem ext3 (md0 had ext4)
- mounted the logical volumes
- added them as Directory storages via the web GUI (see the storage.cfg sketch after this list)
- copied the vmdks back to the virtual_disks logical volume
- started all KVMs - it worked... YES!
- the backup KVM got the logical volume "backup" as its second hard drive
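For reference, the Directory storages added via the GUI end up in /etc/pve/storage.cfg roughly like this (the storage IDs are just what I named them):

Code:
dir: lvm_stripe_vms
        path /mnt/proxmox-lvm_stripe-vms
        content images

dir: lvm_stripe_backup
        path /mnt/proxmox-lvm_stripe-backup
        content backup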

Finally, everything worked as before, so I ran pveperf a few more times...
I was very surprised when I saw THIS:

Code:
proxmox:~# pveperf /mnt/proxmox-lvm_stripe-vms
CPU BOGOMIPS:      26476.20
REGEX/SECOND:      1339785
HD SIZE:           689.02 GB (/dev/mapper/lvm_stripe-virtual_disks)
BUFFERED READS:    253.82 MB/sec
AVERAGE SEEK TIME: 12.37 ms
FSYNCS/SECOND:     1631.98
DNS EXT:           77.36 ms
DNS INT:           0.32 ms (xxxxx.local)

I think that is not a bad result for two 7200 rpm SATA-300 HDDs.

I will test it for a few more days and will report back!

We will see, but I guess mdadm was the main problem here...

Kind regards from Germany,
Ralph