Help with Configuration RAID1 + SSD


Bicka

Guest
Hello,

I am new to the forums but have been reading about Proxmox and testing it for a few months. We are currently running Windows 2003 SBS on an old computer, and we are finally replacing it. It seemed a total waste not to reinstall on a hypervisor, for the obvious reason of allowing multiple servers on the same hardware. I did some testing with Proxmox and found it quite promising, and it has received a lot of praise around the Internet too.
The new hardware:
1 x Intel® Xeon® Processor E3-1230 v3 (4C/8T, 8M cache, 3.30 GHz)
2 x 8GB 1600 ECC RAM (16GB total)
ASUS P9D-M motherboard, Intel C224 chipset, LGA1150, dual LAN, 4 x SATA 6G
2 x 2TB Western Digital Re Enterprise, SATA 6G
1 x 256GB Samsung 840 Pro SSD (not enterprise)
All components are in a server case with a battery-backed power supply, and all carry at least a 5-year warranty.

I am at a loss as to how to configure the storage. I would like to use the SSD to accelerate the system, but I also want to set up RAID 1 on the HDDs, replicate back to those drives, and use them for large near-line content such as archived photos, application install files, system backups and file backups.

I would like to push for Terminal Services, or at least RemoteApp, to run our main database application; it is a legacy 16-bit Access 2.0 app and it is awful. Creating multiple VMs on the same hardware and sharing the database file with Samba was still slow, and hosting it on FreeNAS and running it from a Windows XP VM was slow too. The databases are approximately 1.5GB in total, so they would fit in a RAM disk. The system could also sustain up to 30 seconds of data loss with no major issues: if the system failed without committing to disk, a rollback to 30 seconds prior would be adequate, as data entry is performed manually and we would just enter it again. We wouldn't lose any actual data.

I don't know if there are any storage options that would let all VMs on Proxmox access a shared store and still be fast; possibly iSCSI? (No idea.)

Currently everything is installed on the SSD, as I have no idea how to configure RAID 1 on the hard drives while using the SSD for cache. I'm assuming this will need to be done through Wheezy with a third-party application for software RAID. Otherwise I'll have to convince them to buy a hardware RAID card.

If anyone has a similar setup or may be able to offer some advice, I'm happy to do the research and nut it out myself. I just haven't found much on the forums or the Internet similar to my configuration.

Thanks in Advance.
 
Currently everything is installed on the SSD, as I have no idea how to configure RAID 1 on the hard drives while using the SSD for cache.

That's not a good idea at all: you will have data loss on a power outage or if the server crashes.
To avoid that you need a RAID controller with a BBU, with the hard disks' own caches turned off.


Otherwise I'll have to convince them to buy a hardware RAID card.

Yes, you should really convince them to buy a hardware RAID controller; Areca or LSI/3ware are my favourites.


And with two SATA drives in RAID 1 you will not be very happy with the IO performance.
A RAID 10 with 4 drives is much better, and 10k or 15k SAS drives instead of SATA also improve IO performance.
For redundancy I would also put the OS/boot disk on at least a RAID 1 volume.

But it depends on which VMs you will run and how many; proper hardware sizing is an important point...
 
Thanks for all the suggestions Screenie.

That's not a good idea at all: you will have data loss on a power outage or if the server crashes.
To avoid that you need a RAID controller with a BBU, with the hard disks' own caches turned off.
Is the data loss due to the unreliability of the SSD, or is there something else to consider here? I'm confident taking my chances with the SSD, especially if it is backed up regularly to the HDDs.
What would be perfect is a journalling file system that stored changes and could roll back transactionally, like SQL. Unfortunately the system is Access, and that isn't going to change.
The system is resilient to data loss in the sense that we are entering data manually; losing the last 5 minutes of data wouldn't really set us back much. There would be no unrecoverable data, we would just have to enter it again. The battery UPS is tested, and in a power outage the clients would be corrupting the data even if the server stayed powered.

Yes, you should really convince them to buy a hardware RAID controller; Areca or LSI/3ware are my favourites.
I have a Dell PERC H200 8-port SAS 6Gb/s (an LSI 9240-8i) at home; I might be able to get a similar one with a BBU off eBay for ~$100 or so.

And with two SATA drives in RAID 1 you will not be very happy with the IO performance.
A RAID 10 with 4 drives is much better, and 10k or 15k SAS drives instead of SATA also improve IO performance.
That's why I have the SSD for good IO; I assumed it was possible to periodically flush the changes to the HDDs as a sequential write, say every 60 seconds or so, similar to how FancyCache operates on Windows. The HDDs are there as larger redundant storage space for slower content: backups, photos, application installs, etc.

For redundancy I would also put the OS/boot disk on at least a RAID 1 volume.
I agree with this, as I doubt there would be much IO overhead from the OS (Proxmox, VMs etc.). I assume most of the IO will be generated by the Access database, which I would even consider a RAM disk for, but I doubt that would increase IO beyond the SSD, and it would add yet another single point of failure.

Also, the external backup routine will be replication to another site; I know RAID is not a backup.
 
It is possible to do what you want on Proxmox 3.1 (with Debian Wheezy), using bcache and software RAID, but I agree with screenie.

The process of getting the SSD cache set up on top of a software RAID 1 is not simple, and if there is a drive problem you cannot boot to fix it (since it is software RAID). The combination is very new to Linux (within the last year), and I do not believe it is mature enough even for a small production environment yet.
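For reference, the layering being described looks roughly like this on Wheezy. Device names are assumptions, the commands are destructive, and this is only a sketch of the stack (mdadm RAID 1 underneath, bcache on top), not a recommendation:

```shell
# Sketch only: destructive commands, device names are assumptions.
# 1. Mirror the two 2 TB drives with mdadm (software RAID 1).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# 2. Make the SSD partition a cache device and the array the backing
#    device (requires the bcache-tools package).
make-bcache -C /dev/sda4     # cache device on the SSD
make-bcache -B /dev/md0      # backing device on the RAID 1 array

# 3. Attach the cache set to the backing device. The cache set UUID is
#    printed by 'make-bcache -C'; shown here as a placeholder variable.
echo "$CACHE_SET_UUID" > /sys/block/bcache0/bcache/attach

# 4. Writeback mode lets writes land on the SSD first.
echo writeback > /sys/block/bcache0/bcache/cache_mode

# The resulting /dev/bcache0 could then be used as an LVM PV for VM images.
```

The boot problem mentioned above follows from this stack: if a member disk fails, the initramfs must assemble both md and bcache before the root filesystem exists, which is exactly the fragile part.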

You are better off finding a RAID controller with SSD caching. I suggest this LSI card with their Cachecade license:

http://www.amazon.com/MegaRAID-9271-8iCC-Pcie-Port-Sata/dp/B008ZIP0EU

I assume you are using Western Digital RE4 2TB disks, which are very nice but will be slow if you are running Microsoft Exchange on just two. You should grab at least 2 more of those as well.

This would add around $1200 to the cost of the box ($800 for the RAID card and two RE4 2TB drives at $200 each), but trust me, it would be worth every penny.

I would not skimp by buying a 4-port RAID card, either. You will only save about 20% of the cost with half the ports, and keep in mind you could be using that card for the next 5-8 years (most cards run forever). So make sure to buy an 8-port card for future expansion, or for easily moving the RAID array to a new box.
 
An update,

Unfortunately I've gotten no closer to putting this system into production, but I have learned a lot about Linux and Proxmox in the process.

My current storage situation is:
root@proxmox:~# pvscan
  PV /dev/sdc1   VG hdd2   lvm2 [1.82 TiB / 1.33 TiB free]
  PV /dev/sdb1   VG hdd1   lvm2 [1.82 TiB / 1.33 TiB free]
  PV /dev/sda2   VG pve    lvm2 [232.38 GiB / 16.00 GiB free]
  Total: 3 [3.87 TiB] / in use: 3 [3.87 TiB] / in no VG: 0 [0 ]

root@proxmox:~# lvscan
  ACTIVE   '/dev/hdd2/lv_hdd2' [500.00 GiB] inherit
  ACTIVE   '/dev/hdd1/lv_hdd1' [500.00 GiB] inherit
  ACTIVE   '/dev/pve/swap' [15.00 GiB] inherit
  ACTIVE   '/dev/pve/root' [58.00 GiB] inherit
  ACTIVE   '/dev/pve/data' [143.39 GiB] inherit

1 x SSD with Proxmox installed on it
2 x HDD with LVM for images plus a 500GB ext3 volume for backups etc. (not mirrored with RAID, but set up identically; backups go to alternating drives).
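For anyone reproducing this layout, the two HDD volume groups would have been created roughly as follows. This is a sketch inferred from the pvscan/lvscan output above; device names come from that output, and the exact partitioning is an assumption:

```shell
# Sketch: recreate the two single-disk volume groups from the output above.
# /dev/sdb1 and /dev/sdc1 are the HDD partitions shown by pvscan.
pvcreate /dev/sdb1
pvcreate /dev/sdc1

# One volume group per disk, deliberately not mirrored.
vgcreate hdd1 /dev/sdb1
vgcreate hdd2 /dev/sdc1

# One 500 GB logical volume per disk (matching the lvscan output),
# formatted ext3 for backups and bulk storage; the rest of each VG
# stays free for raw VM images.
lvcreate -L 500G -n lv_hdd1 hdd1
lvcreate -L 500G -n lv_hdd2 hdd2
mkfs.ext3 /dev/hdd1/lv_hdd1
mkfs.ext3 /dev/hdd2/lv_hdd2
```

Keeping the disks as independent VGs is what makes the alternating-drive backup scheme possible; a RAID 1 mirror would replicate corruption to both copies, as noted later in the thread.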



I would like to run Windows 8.1 32-bit so I can run the 16-bit application and RDP into it.

The performance within the application is horrible (I don't have numbers, but it takes about 0.5 seconds to render a page, the whole thing is jerky, and the CPU goes to 100%). Does anyone have a Terminal Server or Remote Desktop running successfully on Proxmox?

Is the performance issue related to the CPU switching modes to run the 16-bit app?

The CPU running at 100% has me wondering: is there a WDDM Windows driver I need to install? Is there a video card that Proxmox presents to the guest?

Thanks.
 
...
I would like to run Windows 8.1 32-bit so I can run the 16-bit application and RDP into it.

The performance within the application is horrible (I don't have numbers, but it takes about 0.5 seconds to render a page, the whole thing is jerky, and the CPU goes to 100%). Does anyone have a Terminal Server or Remote Desktop running successfully on Proxmox?

Is the performance issue related to the CPU switching modes to run the 16-bit app?

The CPU running at 100% has me wondering: is there a WDDM Windows driver I need to install? Is there a video card that Proxmox presents to the guest?

Thanks.
Hi,
yes, I have several Terminal Servers running on Proxmox VE (all Win2003 32-bit).
Two of them were running on VMware before, and PVE does a good job (I haven't measured, but there are no huge differences with my configuration).

Most problems are IO-related. The IO performance of single SATA disks is normally bad. How fast is your SSD? Have you tried running the VM on the SSD?
A good RAID controller is important for fast IO (with 4 disks in RAID 10; SAS disks are better than SATA).

Udo
 
Thanks Udo,

I've made some progress. It seems my quad core with HT was showing up as 8 cores, which I passed to the guest (trying both 'Host' and QEMU64 CPU types with 8 cores). When I changed it to 4 cores it seemed to improve slightly.
My issue isn't IO: I can run CrystalDiskMark with very little difference between bare metal and Proxmox.
CrystalDiskMark on the Proxmox guest, Windows 8.1 32-bit, image stored on 'Directory' storage on the local SSD:
-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]


Sequential Read : 306.751 MB/s
Sequential Write : 255.709 MB/s
Random Read 512KB : 266.410 MB/s
Random Write 512KB : 191.600 MB/s
Random Read 4KB (QD=1) : 19.830 MB/s [ 4841.2 IOPS]
Random Write 4KB (QD=1) : 30.320 MB/s [ 7402.2 IOPS]
Random Read 4KB (QD=32) : 22.220 MB/s [ 5424.8 IOPS]
Random Write 4KB (QD=32) : 40.047 MB/s [ 9777.2 IOPS]


Test : 50 MB [C: 54.2% (17.2/31.7 GB)] (x1)
Date : 2013/12/11 10:10:26
OS : Windows 8 Professional [6.2 Build 9200] (x86)



My current server is 2003 SBS, and the application installed locally runs very fast with minimal lag.
The old server's specs are a single 160GB SATA 1 hard drive, 2GB RAM, and a Core 2 Duo at 2.0GHz... it's a PC, not a server :(
CrystalDiskMark on the current server (bare metal):
-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 (C) 2007-2013 hiyohiyo
Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]


Sequential Read : 45.308 MB/s
Sequential Write : 34.161 MB/s
Random Read 512KB : 26.049 MB/s
Random Write 512KB : 34.498 MB/s
Random Read 4KB (QD=1) : 0.439 MB/s [ 107.3 IOPS]
Random Write 4KB (QD=1) : 1.041 MB/s [ 254.2 IOPS]
Random Read 4KB (QD=32) : 0.547 MB/s [ 133.4 IOPS]
Random Write 4KB (QD=32) : 1.563 MB/s [ 381.5 IOPS]


Test : 50 MB [C: 76.7% (117.6/153.4 GB)] (x1)
Date : 2013/12/11 10:03:41
OS : Windows Server 2003 SP1 [5.2 Build 3790] (x86)


This is why I have no reason to think the performance issue is IO-related.

I am running a 16-bit app, and I believe there is some horrible emulation inefficiency somewhere that is causing the screen refreshes to lag everyone out.

Thanks in advance.
 
Testing with only 50 MB says nothing at all, since the entire test file will probably never leave RAM. To be reliable you must use a test file at least twice the amount of RAM dedicated to the VM.
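The same caching pitfall can be checked from the Linux side with plain dd: forcing an fdatasync before dd reports its timing keeps the page cache from flattering the result the way a too-small CrystalDiskMark file does. This is a rough sketch; the default 256 MB is only to keep the example quick, and a real run should exceed the VM's RAM as described above:

```shell
# Sketch: sequential write test that the page cache cannot inflate.
# conv=fdatasync makes dd flush the data to disk before it prints its
# throughput figure.
seq_write_test() {
    testfile="$1"
    count="${2:-256}"   # size in MB; a real run should exceed the VM's RAM
    dd if=/dev/zero of="$testfile" bs=1M count="$count" conv=fdatasync
    rm -f "$testfile"
}
```

For example, `seq_write_test /var/lib/vz/testfile 4096` on the host gives a cache-free sequential write number to compare against the guest's figures.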
 
See this:
Code:
-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]


           Sequential Read :    92.223 MB/s
          Sequential Write :    75.145 MB/s
         Random Read 512KB :    78.317 MB/s
        Random Write 512KB :    41.879 MB/s
    Random Read 4KB (QD=1) :     3.937 MB/s [   961.2 IOPS]
   Random Write 4KB (QD=1) :     3.936 MB/s [   960.9 IOPS]
   Random Read 4KB (QD=32) :    42.818 MB/s [ 10453.5 IOPS]
  Random Write 4KB (QD=32) :    19.094 MB/s [  4661.7 IOPS]


  Test : 1000 MB [C: 73.9% (23.6/31.9 GB)] (x5)
  Date : 2013/12/11 0:33:26
    OS : Windows 7 Enterprise Edition SP1 [6.1 Build 7601] (x64)
  
-----------------------------------------------------------------------
CrystalDiskMark 3.0.2 x64 (C) 2007-2013 hiyohiyo
                           Crystal Dew World : http://crystalmark.info/
-----------------------------------------------------------------------
* MB/s = 1,000,000 byte/s [SATA/300 = 300,000,000 byte/s]


           Sequential Read :   100.023 MB/s
          Sequential Write :    81.285 MB/s
         Random Read 512KB :    92.931 MB/s
        Random Write 512KB :    76.780 MB/s
    Random Read 4KB (QD=1) :     6.705 MB/s [  1637.0 IOPS]
   Random Write 4KB (QD=1) :     7.551 MB/s [  1843.6 IOPS]
   Random Read 4KB (QD=32) :    43.839 MB/s [ 10702.8 IOPS]
  Random Write 4KB (QD=32) :    37.069 MB/s [  9050.0 IOPS]


  Test : 50 MB [C: 75.6% (24.1/31.9 GB)] (x1)
  Date : 2013/12/11 0:38:15
    OS : Windows 7 Enterprise Edition SP1 [6.1 Build 7601] (x64)
 
Testing with only 50 MB says nothing at all, since the entire test file will probably never leave RAM. To be reliable you must use a test file at least twice the amount of RAM dedicated to the VM.

Sorry,

I didn't want to thrash the production server, but the VM with 5x1000MB has the following result:
Sequential Read : 271.265 MB/s
Sequential Write : 251.759 MB/s
Random Read 512KB : 243.855 MB/s
Random Write 512KB : 133.169 MB/s
Random Read 4KB (QD=1) : 13.386 MB/s [ 3268.0 IOPS]
Random Write 4KB (QD=1) : 8.134 MB/s [ 1985.8 IOPS]
Random Read 4KB (QD=32) : 14.868 MB/s [ 3629.9 IOPS]
Random Write 4KB (QD=32) : 9.441 MB/s [ 2305.0 IOPS]


Test : 1000 MB [C: 54.3% (17.2/31.7 GB)] (x5)
Date : 2013/12/11 10:35:36
OS : Windows 8 Professional [6.2 Build 9200] (x86)


This is with qcow2 in IDE mode stored on 'local', which is what the default Proxmox install onto the SSD gives you (bad idea, I know). I can re-test with VirtIO and LVM, but I'm confident this isn't causing my screen-refresh slowdown.

I think the screen slowness may be caused by the GPU or lack thereof.
 
I finally figured out what was causing most of my lag. Believe it or not, Windows 8.1 UAC was preventing my application from loading properly, and it can't be turned off with the slider; I had to edit the registry:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System
Set EnableLUA to 0.


The other MAJOR fail on my part was the core count. I was assigning 8 cores to the VM when I only have 4C/8T. After I changed it to 4 cores (still QEMU64) the performance was greatly improved.

Thanks to everyone for your help.
I'm still not getting bare-metal performance, however, so I am going to try resizing the Proxmox partitions on the SSD to give me some room for LVM, and I will switch the drivers to VirtIO.

My only concern now is the backup solution; as far as I know I can't back up LVM volumes, as they are not in a container like qcow2. My plan is to back up to alternating HDDs in case one fails or there is corruption (I don't like RAID 1, as it would replicate the corruption onto both drives). The likelihood of corruption is higher than the chance of a hardware failure, IMO.
 
An image on an LVM volume can be backed up using Proxmox's default backup features. What you can't do with an image on an LVM volume is snapshots; that requires qcow2 stored on a file system (directory, GlusterFS or NFS), or Ceph, Sheepdog, or ZFS.
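As a sketch, the built-in backup of a VM whose disk sits on LVM looks like this (the VMID 101 and the storage/directory names are assumptions for illustration):

```shell
# Back up VM 101 with vzdump. In snapshot mode the guest keeps running
# while the dump is taken.
vzdump 101 --mode snapshot --storage backup-hdd --compress lzo

# Or dump straight to a directory on one of the HDD volumes, which fits
# the alternating-drive scheme described above:
vzdump 101 --mode snapshot --dumpdir /mnt/hdd1/backups --compress lzo
```

The distinction in the post above is between these full-image backups (which work fine on LVM) and live VM snapshots/rollback, which need a snapshot-capable image format or storage backend.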
 
