Software RAID6 as Local VM Storage Very Slow

shawnk

New Member
Sep 9, 2016
Hi -

I am in the process of setting up a PVE 4 environment. I have multiple nodes that will be clustered, but for now I am testing on one node that has 6 physical disks. /dev/sda holds the PVE OS, and I have combined the other disks into a software RAID6 array using mdadm. I then ran the pvcreate and vgcreate commands as per this article.
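
Roughly what I ran to build it (device names and the VG name are from memory, so treat these as approximate):

Code:
# create the array (4 active disks + 1 hot spare), then put LVM on top of it
mdadm --create /dev/md0 --level=6 --raid-devices=4 --spare-devices=1 /dev/sd[b-f]1
pvcreate /dev/md0
vgcreate vg_raid6 /dev/md0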

I am able to run PVE just fine and I added this storage. However, I have noticed that the performance of the running VMs has been very, very slow. I have installed clean OSs: Ubuntu 14.04 (super slow) and Scientific Linux 6.5 (a bit faster).

Is there something I am missing here? Is software RAID6 not supported or does it cause issues? Any advice/feedback would be great!

Thanks!
s
 
I have PVE on software RAID5 with 4 HDDs and its performance is very good. Maybe your CPU is too slow for software RAID6?
 
Thanks for the response, but this is a Dell C6220, it's a really powerful server. The Proxmox UI loads fine; it's the VMs that are super slow.

Additionally, a clean install of Ubuntu can't seem to use virtio for networking. This all seems a bit off to me.
 
Shut down all VMs, then run pveperf /mnt/raid6 to check your baseline reads.
Run this to check writes:
Code:
dd if=/dev/zero of=/mnt/raid6/somefile bs=1024k count=8192 conv=fdatasync

Run "top" on PVE as you start your VM.... watch gui for io delay graph on host machine summary.

For Linux VMs, RAID6 should be fine; if you run Windows VMs, you will soon run into the write wall. If you are running Ubuntu, an LXC container should be faster. I have found a number of things I don't like in newer Ubuntu releases; 12.04 seems to just run without asking questions.

You have burned 2 disks out of 6 by using RAID6: if they were 1TB drives, that means you have 4TB usable. Using those 6 drives in a RAID10 would give better performance with 3TB of space, a loss of only 1TB (25%); buying one more pair of disks would make up for that if you need the space, and boost your performance even further. During a RAID6 rebuild your VMs will most likely be completely unusable, and the rebuild will take a long time if the drives are big. In fact, what RAID6 attempts to prevent it may actually induce: rebuilds overstress the working drives and often cause further failures.

If you are set on RAID6, there are a number of tuning steps you can take: http://lmgtfy.com/?q=mdadm+tune+raid6
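
The usual starting points look something like this (values are rough defaults to experiment with, not gospel, and assume the array is md0):

Code:
# a bigger stripe cache helps raid5/6 writes; costs RAM (entries x 4KiB x number of disks)
echo 8192 > /sys/block/md0/md/stripe_cache_size
# larger read-ahead helps sequential workloads
blockdev --setra 65536 /dev/md0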
 
Thanks, I will go with RAID10 and test that out. Right now we are doing 4 x 1TB plus a 1TB hot spare.

If there's a recommendation for the filesystem to put on the RAID for best use in Proxmox as a "Directory" storage volume, that'd be good to hear too! I was going to use ext3.
 
ext4 on LVM is the default install from the ISO. lvm-thin adds some other handy features and is probably preferred. If you decide on thin, look for that option in the storage GUI when you add the storage.
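
If you go the CLI route instead of the GUI, a thin pool on the raid VG would look roughly like this (the storage ID, VG name, and pool name are placeholders):

Code:
# carve a thin pool out of the VG, then register it as Proxmox storage
lvcreate --type thin-pool -l 95%FREE -n thinpool vg_raid6
pvesm add lvmthin raid6-thin --vgname vg_raid6 --thinpool thinpool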
 
Is the RAID already synchronized? Please post:

Code:
cat /proc/mdstat

And software RAID using mdadm is not a supported configuration in Proxmox VE; if you want to use software RAID, there is ZFS to turn to. It's supported, and you can install directly onto it via the default Proxmox VE installer.
If you still want to use mdadm-based RAID, you'll want to have thick or thin LVM on it and no additional filesystem layer in between.
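
For example, registering the md-backed volume group directly as VM storage (the storage ID and VG name are placeholders):

Code:
# LVM straight on the array -- no filesystem layer in between
pvesm add lvm raid6-lvm --vgname vg_raid6 --content images,rootdir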
 
Hi lnxbil. When you say software RAID isn't supported, does that mean for storage as well, or just the OS? To clarify, I have my OS installed on its own physical drive using the Proxmox defaults. In the server I have 5 other physical drives that I want to join together to use for local storage.

What I am seeing is a very, very slow response, both when installing a VM and when running one (for example Ubuntu Server 14). It takes over an hour to do an ISO install of Ubuntu, and then the OS itself in the VM is incredibly slow.

I have since tested using non-RAIDed storage (a single disk with ext4) and (software) RAID10 storage and am having the same issues. This seems really strange, as my older server (my current production server, which I did not set up) is much, much faster (it can do an install in about 10-20 minutes). It's almost like my new server doesn't have a cache or something; it's very strange.

Any ideas are welcome, thanks!
 
Hi lnxbil. When you say software RAID isn't supported, does that mean for storage as well, or just the OS? To clarify, I have my OS installed on its own physical drive using the Proxmox defaults. In the server I have 5 other physical drives that I want to join together to use for local storage.

Support as in support from the Proxmox company with a subscription. It's Linux, so you can do whatever you like.

What I am seeing is a very, very slow response, both when installing a VM and when running one (for example Ubuntu Server 14). It takes over an hour to do an ISO install of Ubuntu, and then the OS itself in the VM is incredibly slow.

I have since tested using non-RAIDed storage (a single disk with ext4) and (software) RAID10 storage and am having the same issues. This seems really strange, as my older server (my current production server, which I did not set up) is much, much faster (it can do an install in about 10-20 minutes). It's almost like my new server doesn't have a cache or something; it's very strange.

Any ideas are welcome, thanks!

Still waiting for the requested command output.
 
Sorry, I didn't post that info, as I know my RAID is synced; it was made weeks ago. I have multiple nodes and have deleted the RAID on a few of them to test different storage variations, as I said above, so the speed issue I am seeing may not be linked to the software RAID specifically.

However, here is the output:
Code:
Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sdb1[0] sdf1[4](S) sde1[3] sdd1[2] sdc1[1]
      1953260544 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk
 
So, just to close the loop on this thread: it turned out that the servers I was working on did not have hardware virtualization enabled in the BIOS. I had checked for it previously and didn't find it, but on a second pass it was discovered. The strange part is that the Proxmox OS was reporting that the virtualization hardware was present and enabled, so I didn't think to check the BIOS a second time.
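
In case it helps anyone else, a quick way to check from the shell whether the BIOS actually exposes the virtualization extensions (a result of 0 means disabled or unsupported):

Code:
# counts Intel VT-x (vmx) or AMD-V (svm) flags in the CPU feature list
egrep -c '(vmx|svm)' /proc/cpuinfo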

I also discovered the per-VM option called "KVM hardware virtualization", which I had been disabling because it was preventing the VMs from running. It turns out that disabling that setting makes Proxmox emulate the hardware for the VM in software instead of using the node's hardware support. This made my VMs run as if it was 1989.

So, problem solved, and thanks for the tips on RAID6. I will be moving all my RAIDs to RAID10 now.
 
I also discovered the per-VM option called "KVM hardware virtualization", which I had been disabling because it was preventing the VMs from running. It turns out that disabling that setting makes Proxmox emulate the hardware for the VM in software instead of using the node's hardware support. This made my VMs run as if it was 1989.

This is hardware-assisted virtualization. 1989 is a bit far-fetched, but processor hardware virtualization was introduced in the early-to-mid 2000s, depending on the features and processor brand. The underlying technology is QEMU, and it is capable of emulating any kind of hardware, including other architectures.
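
A quick way to verify that the host can actually use KVM, and to re-enable the per-VM setting that was turned off (the VM ID is a placeholder):

Code:
# the kvm modules and /dev/kvm only show up when hardware virtualization is enabled
lsmod | grep kvm
ls -l /dev/kvm
qm set <vmid> --kvm 1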
 
