Questions regarding Proxmox and Dell PowerEdge VRTX

tavo
New Member · Dec 19, 2013
Hello,

I am currently using Proxmox 1.9 on a clone desktop (acting as a server for the company) and we are very happy with the functionality it offers. The time has come to acquire machines that will act as proper servers, and with them we would like to set up a high-availability environment.

So we put the project out to bid, and one of the hardware providers has lent us a shared-infrastructure server ( http://www.dell.com/us/business/p/poweredge-vrtx/pd ) so we can test whether it is compatible with Proxmox 3.1. We managed to install two instances of Proxmox that are to be added to the cluster, but when we try to add storage in the web UI we cannot detect the virtual disk that is meant to be used as the shared storage. On a related note, we ran a test where we installed Openfiler on a different machine and were able to add iSCSI storage successfully; in other words, when we added a storage of type iSCSI and set the IP address to that of the Openfiler server, Proxmox found a target.
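
(For reference, the Openfiler test was done through the web UI, but the CLI equivalent looked roughly like the following; the portal IP and target IQN here are placeholders, not our real values.)

Code:
pvesm add iscsi openfiler-san --portal 192.168.1.50 --target iqn.2013-12.com.openfiler:storage.target0
pvesm status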

The questions:

1. How likely is it that there is an incompatibility between Proxmox 3.1 and the PowerEdge VRTX that is preventing us from detecting the shared storage? (This system uses the PERC8 to control the disk drives.)

2. What type of storage (LVM, NFS, iSCSI, ...) should be used for the SAN?

3. What questions could I ask the hardware manufacturer to better understand whether Proxmox is a suitable candidate for this architecture?

Regards,

P.S.: The concept of high availability is new to me, so if I have said anything that doesn't make sense, or if more clarification is needed, feel free to correct me. And I appreciate the Proxmox community and have every intention of acquiring the due support once I have the ground firmly established.
 
Ok, so here's the deal. The organization I work for was looking into purchasing some VRTX servers from Dell. Dell ran around in circles explaining and then refuting Linux support for the VRTX.
All the actual "blades" you can get (the M520, for example) are in fact supported on Linux; only the chassis is actually new. Once you get shared storage working (Shared PERC8), you're high-flying.

The short story: It's not supported. Your PERC8 will not work.

Don't worry, I've done all the hard work for you and recompiled the kernel to support the VRTX, and alleviated some other issues as well. HOWEVER, you will no longer be able to run containers, as OpenVZ is not supported on any kernel version recent enough to carry the drivers required for the shared storage. For that matter, OpenVZ will not even run on a Debian stable kernel, but I digress.

Sidenote to the Proxmox devs: it might be time to step away from OpenVZ and towards LXC. Anywho...

When I get some time I'll upload a working version of the kernel, and possibly set up a repo / fork of the 3.12 longterm kernel source, until the newest stable kernel included with Debian has the appropriate driver support (and that probably won't be for some time).

Now, for your questions:

1.) Absolutely, with a stock kernel. As a matter of fact, no distribution (nor any stock kernel) supports the Shared PERC8.

2.) I set up the SAN to use Ceph / RBD (RADOS Block Device) and segregate it into even chunks per node. I have more than one VRTX chassis running, so this works well for me. Be aware, however, that the standard passthrough switch included in the chassis only supports 1 Gbps (even as a bonded pair that's only 2 Gbps), so if you have any intention of doing HA you should stress the importance of 10 Gbps NICs (yes, plural, one for each node contained within the chassis). This also means you will need an appropriate switch to handle that sort of volume of traffic.

If you absolutely had to, you COULD bond 2 ports on the standard passthrough switch, but I would not recommend this.
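
If you do go the bonding route anyway, a minimal sketch of what that looks like on a Debian/Proxmox node is below (interface names, the LACP mode and the addresses are illustrative, and your switch has to support 802.3ad):

Code:
auto bond0
iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0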


3.) Ask them about whatever system requirements you currently have? I mean, the straightforward questions. Either way, I asked questions and they got me nowhere.

Yes, I do run these in a production environment. No, I don't think you'll have much luck right now finding someone else who does on Linux.
Feel free to ask if you have any more questions.
 
Perfect: did you produce an installer with the kernel? Or better, do you have a link for downloading it?
My e-mail is diaolin AT diaolin DOT com

Tx, Diaolin
 
This is the compiled kernel in deb format: http://www.filedropper.com/linux-image-31217-vrtx31217-vrtx-1000customamd64

I also have all the source and an installer for re-compilation in the future, should there be more kernel-related updates that need to be done. The reason it's compiled on 3.12 and not 3.14 (I also have the VRTX version of 3.14; hint hint, it's slightly faster on IO than 3.12 and below) is that there were significant changes in the way the megaraid drivers (and all IO) now function which, unfortunately, make it impossible to use Dell AppAssure without re-coding all of the source.
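
If you'd rather roll it yourself than wait for the upload, the rough shape of the rebuild is below. This is only a sketch, not my exact build script; the package list and version are illustrative, and the actual Shared PERC8 patch still has to be applied by hand:

Code:
apt-get install build-essential bc fakeroot libncurses5-dev
wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.12.17.tar.xz
tar xf linux-3.12.17.tar.xz && cd linux-3.12.17
# apply the megaraid_sas / Shared PERC8 patch here, then:
make olddefconfig
make -j"$(nproc)" deb-pkg LOCALVERSION=-vrtx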

It's a horrible download website; anywho, it'll hopefully get you where you need to go.
Let me know if you need anything else.

~xmo
 
No problem, I hope it helps.

You can install it after the normal Proxmox install, or if you prefer, you can install it on top of Debian and pull in the repos from Proxmox.
Either way should work just fine.
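
If you take the Debian route, pulling in the Proxmox VE 3.x packages on wheezy is roughly the standard procedure (repo line, key URL and metapackage name as per the Proxmox install docs of that era; double-check them before use):

Code:
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" > /etc/apt/sources.list.d/pve.list
wget -O- http://download.proxmox.com/debian/key.asc | apt-key add -
apt-get update && apt-get install proxmox-ve-2.6.32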

I install on top of Proxmox, as that has only the required packages for the hypervisor and nothing extra to remove after, and it's much easier / faster. You should be able to install it at any point after your distro installation and be just fine.

Code:
dpkg -i linux-headers-3.12.17-vrtx_3.12.17-vrtx-10.00.Custom_amd64.deb
dpkg -i linux-image-3.12.17-vrtx_3.12.17-vrtx-10.00.Custom_amd64.deb

As simple as that. Let me know how it works out for you.

~xmo
 
Yes, the only source code added during recompilation is the PERC8 storage driver, under megaraid_sas. The other issues I was referring to are issues with the chassis KVM, and those are simply compile-time options set via make.
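
For what it's worth, once the patched kernel is booted you can sanity-check that the Shared PERC8 is actually being driven by megaraid_sas with the usual tools (nothing VRTX-specific here):

Code:
lspci | grep -i raid
lsmod | grep megaraid_sas
dmesg | grep -i megaraid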

~xmo
 
Did the kernel work OK for you, or has this been resolved some other way? I am also looking at getting a VRTX and running Proxmox on it.
 
Regarding your questions: I seem to be one of the only people currently discussing the use of these drivers. That being said, I have found them to be very stable in my current production environment since installing this kernel.

The big drawback is that the only way to run these drivers is on a newer kernel that cannot run OpenVZ (with no indication that will ever change), and of course the fact that they are not as well tested as I would prefer for production.

Again, I have extensively tested these drivers, and would be willing to conduct / provide testing if need be. Here are some (sanitized) screencaps of the current 2-chassis, 4-node cluster running Ceph / RBD that I'm using my VRTX servers for.

 
Hi, thanks for the info.

Does this mean you have to reinstall the kernel after every release of Proxmox?
Do you run any IO-intensive applications like SQL? If so, how does it perform?
 
I notice you use a custom 3.12 kernel. Did you have to apply the PVE kernel patches to get it going?

Serge
 
I do not have to reinstall the kernel with each release of Proxmox, but yes, the kernel did need to be patched to support the SPERC8 (the shared storage array), NOT the local node storage.
Each node is connected to a 1 Gbps line. The bridge between the two chassis, however, is 2x100 Mbps (temporarily), so this probably has some effect on performance in my use case.

Some performance tests on one host node:
Code:
rados -p shared bench -b 4194304 60 write -t 32 --no-cleanup

Total time run:         60.923821
Total writes made:      1386
Write size:             4194304
Bandwidth (MB/sec):     90.999


Stddev Bandwidth:       20.2098
Max bandwidth (MB/sec): 136
Min bandwidth (MB/sec): 0
Average Latency:        1.40433
Stddev Latency:         1.03753
Max latency:            6.31692
Min latency:            0.243674
Code:
rados -p shared bench -b 4194304 60 seq -t 32 --no-cleanup

Total time run:        44.101218
Total reads made:     1386
Read size:            4194304
Bandwidth (MB/sec):    125.711


Average Latency:       1.01773
Max latency:           4.13289
Min latency:           0.03793

Testing on a fresh Ubuntu 14.04 VM:
Code:
dd if=/dev/zero of=/tmp/output conv=fdatasync bs=384k count=1k; rm -f /tmp/output

1024+0 records in
1024+0 records out
402653184 bytes (403 MB) copied, 4.7321 s, 85.1 MB/s

Note: The RAID array is configured as RAID 5. Ceph is configured for what would be the equivalent redundancy of RAID 1 on top of it. All the HDs installed are 5400 RPM Western Digital SAS drives.
The array was also actively in use by some production VMs.

In a perfect world: RAID 10 of 10,000 RPM SAS drives for the Ceph pool, RAID 10 SSD for the Ceph journal and RAID 0 SSD for the Ceph pool cache, plus bonded 2x Gbps NICs (or better) per node.

My big issue ATM isn't really stability (in the time these have been up they have been through over 20 power outages and are still kicking); it's mostly an issue of speed. That being said, I'm getting bad speed because I still need to reconfigure each LUN so that RBD handles redundancy rather than RAID, and because of my 2x100 Mbps connections between the chassis.
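
(For reference, pushing the redundancy onto Ceph rather than the RAID controller is just the usual pool setting; "shared" is the pool name from the benchmarks above, size 2 matches the RAID 1-equivalent redundancy mentioned earlier, and the min_size value here is only illustrative.)

Code:
ceph osd pool set shared size 2
ceph osd pool set shared min_size 1
ceph osd pool get shared size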

When I get reconfigured, I'll post some updated results.
 
I meant: did you use the standard stock source for the 3.12 kernel with only the SPERC8 patches, or did you also add the Proxmox-specific server patches?

Serge
 
I used the standard kernel without the Proxmox patches. If I get time at some point I might add in the firmware patches, but considering the stability right now, I don't see that as necessary.
 
Does anyone have a link to download the kernel, guys? Or do you have a new way to make this VRTX storage visible to my Proxmox 3.3?
 
