Intel Modular Blade Server Support / Experiences

Petrus4

Member
Feb 18, 2009
We are planning to install Proxmox VE on an Intel Modular Server.

Here are the specs for the two blades we will be using:

FlexFrame Server Compute Module - 2-way - ATI ES1000 - Gigabit Ethernet
(2x2) Intel Xeon L5420 quad core, 2.5 GHz, 12 MB cache, 1333 MHz FSB, 45 nm; Thermal Design Power: 50 W, Demand Based Switching, Execute Disable Bit capability, Intel Virtualization Technology, Intel 64 Technology, Intel I/O Acceleration Technology, Enhanced Halt State (C1E), Intel Thermal Monitor 2, 8 GB RAM

Storage will be done on the integrated SAN. We are looking at going with the 2.5" SAS drive chassis (16 drives max) instead of the 3.5" SAS/SATA chassis (6 drives max) so that we have room to grow. We will probably start with 3 drives in RAID 5 + 1 hot spare to store guest images and 2 drives in RAID 1 for the virtual environments.

Initially we will have two blades.


The blades will have the following virtual installations:

Blade 1 (on the inside)

1. Windows Server 2003 PDC (Active Directory, DHCP, DNS, IAS, file server for 60 users) (on KVM)
2. Windows Server 2003 BDC (FileMaker Server 10, WSUS, DHCP, DNS) (on KVM)
3. Possibly 1 or 2 Linux web application servers for the intranet (on OpenVZ)


Blade 2 (on the DMZ) (all on OpenVZ)

1. Ubuntu web server with 8-10 low-traffic websites (WordPress, Joomla, static content)
2. Ubuntu Moodle server (~100 concurrent users)
3. Mail server: Ubuntu, Postfix, Courier IMAP, SquirrelMail, amavisd-new, SpamAssassin, ClamAV; 450 users using IMAP (this may be too much for one blade, so I may split the SMTP and virus/spam filtering out to another box)


Apart from one post here, I could not find any info on this forum about this type of server.

So here are my questions:

1. Is this hardware supported by Proxmox VE?

2. Can anyone share experiences with installing Proxmox VE on a similar system?

3. Are there enough resources on each blade for the guest systems listed above to function properly?

4. Any other suggestions, recommendations, or pitfalls to avoid?


Thanks in advance, everyone!
 
We plan to certify the new Intel Modular Server, as this is currently the best price/performance solution for most people.

Based on your specs, you are talking about the "old" one. I highly recommend going for the new version with the Intel 55xx processors. I also recommend the 2.5 Zoll disks (14, not 16) - the best price/performance ratio is with 300 GB, 10k RPM drives.

Schedule:
We will have the first beta of the upcoming new storage model in August (at the earliest), and this is also the start date for the certification process.

Currently we are looking for sponsorship of a Modular Server for our test lab, so that we can provide well-tested software to the community using Modular Servers. Martin also plans to provide video tutorials and best practices for the Intel Modular Server.

If you start now with the Modular Server and you hit issues, we will help to identify and fix them as soon as possible - but I do not expect trouble.

RAID 5 is known not to be the fastest; think about RAID 10.
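The RAID 5 vs RAID 10 difference is mostly about the random-write penalty: RAID 10 turns one logical write into two physical I/Os (both mirrors), while RAID 5 turns it into roughly four (read old data, read old parity, write new data, write new parity). A minimal sketch of the effect, where the per-drive IOPS figure, drive count, and read/write mix are illustrative assumptions rather than numbers from this thread:

```python
# Rough random-IOPS sketch for a mixed read/write workload.
# Assumptions (not from this thread): ~140 random IOPS per 10k RPM SAS
# drive, 6 data drives, 50/50 read/write mix.

def effective_iops(drives, per_drive_iops, write_penalty, read_fraction=0.5):
    """Approximate random IOPS the array can serve.

    Each logical write costs `write_penalty` physical I/Os
    (RAID 10: 2 mirror writes; RAID 5: 4, due to the read-modify-write
    of data plus parity).
    """
    raw = drives * per_drive_iops
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

raid10 = effective_iops(6, 140, write_penalty=2)  # 560.0
raid5 = effective_iops(6, 140, write_penalty=4)   # 336.0
```

Under these assumptions the same six spindles serve roughly two-thirds more mixed random I/O as RAID 10 than as RAID 5, which is why RAID 10 is the usual recommendation for VM storage.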
 
We plan to certify the new Intel Modular Server, as this is currently the best price/performance solution for most people.

That is excellent news!! :p

Based on your specs, you are talking about the "old" one. I highly recommend going for the new version with the Intel 55xx processors. I also recommend the 2.5 Zoll disks (14, not 16) - the best price/performance ratio is with 300 GB, 10k RPM drives.

What is the newest version of the Flex server, and why do you recommend going with the 55xx processor as opposed to the 54xx?
(We are trying to go with a more "green" processor.)

What is "Zoll"? I googled it but could not come up with any "Zoll" drives.



We will have the first beta of the upcoming new storage model in August (at the earliest), and this is also the start date for the certification process.

Currently we are looking for sponsorship of a Modular Server for our test lab, so that we can provide well-tested software to the community using Modular Servers. Martin also plans to provide video tutorials and best practices for the Intel Modular Server.

By "upcoming new storage model", do you mean a Proxmox VE version that will support storing guests on a SAN and allow for VMware "vMotion"-type capabilities?

We would love to help sponsor a Modular Server but lack the funds to do so. Sorry. :( Hopefully we can help with the certification process once we implement it over here.

If you start now with the Modular Server and you hit issues, we will help to identify and fix them as soon as possible - but I do not expect trouble.

Great! :p:p Thanks!

RAID 5 is known not to be the fastest; think about RAID 10.

Yes, I agree this would be better, but we are weighing cost vs. storage space, and RAID 5 may be more within our budget.

How would you suggest configuring the drives? Would you recommend putting the Proxmox VE system on a separate RAID 1 array and the virtual instances on the RAID 5 / RAID 10 array?

Thanks!
 
That is excellent news!! :p

What is the newest version of the Flex server, and why do you recommend going with the 55xx processor as opposed to the 54xx?
(We are trying to go with a more "green" processor.)

"Green" means you get the most from your energy input - and the Intel 55xx is the leader here if you go for KVM; for details, see Intel's pages.

The brand-new Modular Server blades use Intel 55xx processors.

What is "Zoll"? I googled it but could not come up with any "Zoll" drives.

Sorry - it's the German word for inch.

By "upcoming new storage model", do you mean a Proxmox VE version that will support storing guests on a SAN and allow for VMware "vMotion"-type capabilities?

Yes, eventually.

We would love to help sponsor a Modular Server but lack the funds to do so. Sorry. :( Hopefully we can help with the certification process once we implement it over here.

And you can submit testimonials and a case study about your usage.

Great! :p:p Thanks!

Yes, I agree this would be better, but we are weighing cost vs. storage space, and RAID 5 may be more within our budget.

How would you suggest configuring the drives? Would you recommend putting the Proxmox VE system on a separate RAID 1 array and the virtual instances on the RAID 5 / RAID 10 array?

Thanks!

It depends on your needs. RAID 10 is fast and the best choice for the current Proxmox VE. The upcoming version will be much more flexible, and you will be able to choose a different storage type for each KVM VM.
 
Thanks Tom,

If I need about 2 TB of host and virtual guest space and another 2 TB of storage/expansion space, what would be the best cost/performance configuration on these Flex systems?

Also, for the systems listed in my first post, what would you recommend for sizing the two blades: number of CPUs per blade, CPU type/speed, and memory per blade to work well with Proxmox VE?
 
Thanks Tom,

If I need about 2 TB of host and virtual guest space and another 2 TB of storage/expansion space, what would be the best cost/performance configuration on these Flex systems?

AFAIK the biggest 2.5" drives you can get are 300 GB, so you need a full set (14, plus one cold spare).

About the RAID configuration, it depends.

Please also consider using raw devices for KVM (/dev/sdX) if you need full performance (and use raw instead of qcow2) - but you cannot back this up with vzdump. Better would be LVM2 on this extra device, with space for backups (and then we only need to adapt vzdump to work here).

The upcoming new storage model will help a lot in configuring storage (additional drives, LVM2, NFS, iSCSI) for KVM guests.

If you need live migration, you need to go for NFS or iSCSI, but as far as I understand your project, this is not needed right now.

For OpenVZ guests, nothing will change at the moment.

Also, for the systems listed in my first post, what would you recommend for sizing the two blades: number of CPUs per blade, CPU type/speed, and memory per blade to work well with Proxmox VE?

Always use 2 CPUs: it makes no sense to spend a lot on dual-socket boards and then only use 1 CPU. Use the Xeon 55xx series; you can go for the cheapest quad-core 55xx, maybe this one: BX80602E5520.

RAM: grab all you can get for your money, as it is hard to upgrade later.
 
But if you are using the Intel 5520 Nehalem processor, I recommend using only 6 memory modules for better performance. Each processor has 3 dedicated memory channels; adding more than 6 modules would slightly degrade the memory speed, though in most cases you might not even notice.

In other words, instead of 12 memory modules of 2 GB each, I would use 6 modules of 4 GB each.
 
Hi
The Intel Modular Server enables a business-in-a-box solution with seamless installation, migration, and growth capabilities. It can support up to six server compute modules, two hard disk drive options (six 3.5" SAS/SATA or fourteen 2.5" SAS hard disk drives), two Ethernet switch modules, an integrated SAN, and a management module. The Intel Modular Server is a flexible and powerful system for the small to mid-size business.
 
OK, we are getting down to the details before we order our Intel Flex server.

Because of our budget, we have to consider going with 3.5" disks instead of the 2.5" SAS, or find very good reasons to spend $6000 more for a full set of 300 GB SAS disks.

13 SAS disks + 1 hot spare = 3.9 TB raw; at RAID 10, that's 1.95 TB usable.

5 SATA disks + 1 hot spare = 5 TB raw; at RAID 10, that's 2.5 TB usable.
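That capacity arithmetic can be sketched quickly (a minimal Python check; note that RAID 10 strictly needs mirrored pairs, so an odd 13-drive set really gives 6 pairs at 1.8 TB plus one leftover drive; the straight halving below follows the post's own arithmetic):

```python
# Usable-capacity math for the two configurations above. Hot spares are
# already excluded from the data-drive counts; real arrays lose a little
# more to formatting overhead.

def raid10_usable_tb(drive_tb, data_drives):
    # RAID 10 stores every block twice -> half the raw capacity.
    return drive_tb * data_drives / 2

def raid5_usable_tb(drive_tb, data_drives):
    # RAID 5 dedicates one drive's worth of space to parity.
    return drive_tb * (data_drives - 1)

sas_r10 = raid10_usable_tb(0.3, 13)   # 1.95 TB, as above
sata_r10 = raid10_usable_tb(1.0, 5)   # 2.5 TB, as above
sas_r5 = raid5_usable_tb(0.3, 13)     # 3.6 TB if RAID 5 were used instead
```

The RAID 5 line shows the trade-off being debated: the same 13 SAS drives nearly double their usable space under RAID 5, at the cost of the write performance discussed earlier in the thread.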

For our situation as described here, I need some ideas on which route to take with these disks. If the best choice is the 14 SAS disks, then I need some really good arguments to persuade my business office and boss to approve this purchase. My concerns are:

1. I will not have enough storage space with the SAS disks, and
2. the increased performance of the SAS disks will not really be necessary for our situation.


Here are the specs so far:

25 Series SAS drives

FlexFrame Server System 6U 25 Series (14 Drive Bay Model)
Two Server Compute Modules installed - with 45 nm, low-power-consumption Xeon CPUs
Storage Area Network: 4 TB shared storage (raw + hot spare). FlexFrame integrated SAN, dynamically configurable storage volume management system.
2 of 4 1 kW Power Supply Units
2 FlexFrame Server Compute Modules (2 x CPU per module)
4 Intel Quad-Core Xeon E5530 / 2.4 GHz - LGA1366 socket - 8 MB L3 cache
12 GB per Compute Module (12x2GB), 1333 MHz, PC2-10600 unbuffered x64 non-ECC, 240-pin
Storage System (14-bay, 4.2 TB maximum @ 300 GB)
14 Seagate 300 GB SAS 10k RPM 16 MB 2.5"
1 Sliding Rail Kit
Shared LUN Key: additional storage capabilities that allow advanced clustering and virtualization applications on the FlexFrame Server System

35 Series SATA drives

FlexFrame Server 35 Series System 6U 6 Bay
Two Server Compute Modules installed - with 45 nm, low-power-consumption Xeon CPUs
Storage Area Network: 4 TB shared storage (raw + hot spare). FlexFrame integrated SAN, dynamically configurable storage volume management system.
2 of 4 1 kW Power Supply Units
2 FlexFrame Server Compute Modules (2 x CPU per module)
4 Intel Quad-Core Xeon E5530 / 2.4 GHz - LGA1366 socket - 8 MB L3 cache
12 GB per Compute Module (12x2GB), 1333 MHz, PC2-10600 unbuffered x64 non-ECC, 240-pin
6 Seagate Barracuda ES.2 hard drives - 1 TB - internal - 3.5" - SATA-300 - 7200 rpm - 32 MB buffer (5 TB + 1 hot spare)
1 Sliding Rail Kit
Shared LUN Key: additional storage capabilities that allow advanced clustering and virtualization applications on the FlexFrame Server System


Does anyone have any tips, suggestions, or strong arguments for SAS vs SATA, etc.? Thanks!
 
But if you are using the Intel 5520 Nehalem processor, I recommend using only 6 memory modules for better performance. Each processor has 3 dedicated memory channels; adding more than 6 modules would slightly degrade the memory speed, though in most cases you might not even notice.

In other words, instead of 12 memory modules of 2 GB each, I would use 6 modules of 4 GB each.

Thanks for the tip! Does this also apply to the 5530 processors?
 
Yes it does.

Nothing prevents you from using all your memory slots, but after discussing with the Intel folks, I learned the following:

The 5500 series uses 3 DEDICATED memory channels per processor.

Assuming you use the 5530, which can run memory at 1333 MHz: with 6 modules in the dedicated slots (three per processor), you get the full memory performance. If you go up to 12 slots, your memory speed drops to 1066 MHz.

And if you go beyond 12 slots, your memory will run at 800 MHz.

So at 6 memory slots you get the full memory speed; above 6 it drops to the next lower speed, and beyond 12 it drops to the lowest speed. (I believe the speeds supported by the 5500 series are 1333 MHz, 1066 MHz, and 800 MHz, depending on the processor.)
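The stepping described above can be written as a small lookup. Treat the exact frequencies and the simple ceiling rule as a sketch of the idea, not a definitive spec; the real behavior depends on the specific CPU, DIMM type, and board, so check Intel's datasheets:

```python
# DDR3 speed vs DIMM population on a dual-socket Xeon 5500 (Nehalem) board:
# 3 memory channels per socket; speed steps down one grade for each extra
# DIMM a channel has to drive.

SPEED_MHZ = {1: 1333, 2: 1066, 3: 800}  # DIMMs per channel -> memory speed

def memory_speed(total_dimms, sockets=2, channels_per_socket=3):
    channels = sockets * channels_per_socket
    dimms_per_channel = -(-total_dimms // channels)  # ceiling division
    return SPEED_MHZ[dimms_per_channel]

print(memory_speed(6))   # 6 DIMMs = 1 per channel -> 1333
print(memory_speed(12))  # 12 DIMMs = 2 per channel -> 1066
print(memory_speed(18))  # 18 DIMMs = 3 per channel -> 800
```

This is why 6 x 4 GB beats 12 x 2 GB here: same total RAM, but only one DIMM per channel, so the memory stays at full speed.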

I hope this helps. I just ordered a 5520 system and will post my findings with Proxmox later.
 
Yes it does.

Nothing prevents you from using all your memory slots, but after discussing with the Intel folks, I learned the following:

The 5500 series uses 3 DEDICATED memory channels per processor.

Assuming you use the 5530, which can run memory at 1333 MHz: with 6 modules in the dedicated slots (three per processor), you get the full memory performance. If you go up to 12 slots, your memory speed drops to 1066 MHz.

And if you go beyond 12 slots, your memory will run at 800 MHz.

So at 6 memory slots you get the full memory speed; above 6 it drops to the next lower speed, and beyond 12 it drops to the lowest speed. (I believe the speeds supported by the 5500 series are 1333 MHz, 1066 MHz, and 800 MHz, depending on the processor.)

I hope this helps. I just ordered a 5520 system and will post my findings with Proxmox later.

Thanks, hisaltesse. Can you post the specs of your server here? Also, if you have any tips on where to best buy these systems, please post them too. Thanks! :p
You can send them to me directly if posting here is not appropriate.
 
I will post the server specs soon.

Regarding memory modules, it does not matter whether they are 2 GB or 4 GB; I explained what matters above.

As soon as we receive the server, I will test Proxmox and post the results here. Only after confirming that everything works well will I recommend where to get the system.
 
I will post the server specs soon.

Regarding memory modules, it does not matter whether they are 2 GB or 4 GB; I explained what matters above.

As soon as we receive the server, I will test Proxmox and post the results here. Only after confirming that everything works well will I recommend where to get the system.

Yes, I understood it's about using 6 modules instead of 8 or 12, etc.

Did you go with the 14 SAS 300 GB drives or the 3.5" SATAs?
 
OK, we finally got approval to purchase an Intel Modular blade server!

It should be shipped to us in the next two weeks, and then I will be setting up Proxmox VE on it. Here are the specs (we could only get 10 drives because of the cost; we may add more if next year's budget allows):


INTEL FlexFrame Server System 6U 25 Series (14 Drive Bay Model)
Two Server Compute Modules installed - with 45 nm, low-power-consumption Xeon CPUs
Storage Area Network: 3 TB shared storage. FlexFrame integrated SAN, dynamically configurable storage volume management system.
2 of 4 1 kW Power Supply Units
2 INTEL FlexFrame Server Compute Modules (2 x CPU per module)
4 Intel Quad-Core Xeon E5530 / 2.4 GHz - LGA1366 socket - 8 MB L3 cache
12 GB per Compute Module (12x2GB): 2 GB DIMM, 240-pin, DDR3, 1333 MHz, CL9, 1.5 V, registered, ECC
Storage System (14-bay, 4.2 TB maximum @ 300 GB)
Actual: performance spec - 3 TB 10k RPM raw
10 Seagate 300 GB SAS 10k RPM 16 MB 2.5"
Shared LUN Key: additional storage capabilities that allow advanced clustering and virtualization applications on the FlexFrame Server System

Please see the first post of this thread for what I want to put on the two blades in this system. I would like some best-practice recommendations on how to set up the Proxmox VE system drives and the KVM + OpenVZ containers.

One idea is to put the Proxmox VE system partitions on mirrored drives, and the OpenVZ containers and KVM instances on a RAID 5 or RAID 10 array. I am still trying to figure out how best to partition the SAN on this device.


Tom & Martin, have you started the certification process for these Intel Modular Servers?
 
Hi, just some brief/general comments on the Intel Modular Server gear:

I've used this kit at one site, which ended up going with a basic VMware ESXi 3.5, and while the Intel Modular Server is 'nice' and 'very functional', I'm not sure it is a great fit for a new purchase as a virtualization platform.

My observations,

- The built-in management features (web console for the modular blade server) are partially wasted due to the inherent remote-management console features of any decent virtualization platform (i.e., with Proxmox VE you really don't care to connect to the console of the physical host; admin is mostly via the web management interface, with occasional SSH access for infrequent, tricky admin work).

- The Intel Modular Server's "built-in SAN" effectively amounts to a shared LSI RAID device for all blade modules, but unless your storage needs are quite modest on a per-server basis, the constrained design of the modular chassis will limit your overall storage capacity.

- Additionally, the "built-in SAN" is managed at the level of the Intel web management layer; ultimately, what you end up with is direct-attached storage mapped to each blade server module. This storage is then just disk, basically -- you don't get the desirable live-migration features which might be available with a "modern" storage architecture (e.g., an iSCSI target). Clearly, you could use DRBD with Proxmox VE to support live migration on this storage, but I think it is not the best fit/use for the "SAN feature" of the Modular Server.

-- i.e., in general, I think you would be better advised to buy economy-level 1U pizza-box servers (dual quad-core or dual six-core, for example), with either traditional local hardware RAID for the OS/Proxmox instance (plus DRBD on pairs of pizza-box servers, for live-migration capability) -- or to put basic hardware RAID disks in your Proxmox VE 1U servers and use iSCSI target(s) as the shared storage for live-migration-capable KVM VMs on Proxmox VE.

Clearly, if you don't want live-migration capability, then you are OK with direct-attached disk (either a traditional 2-disk RAID or whatever suits your taste - 4-disk RAID 10, etc.) on your Proxmox VE host. But having live-migration capability is kind of a good thing, IMHO, and maybe best not to exclude, since you are making a new hardware purchase.

But really, for the price point of the Intel Modular Server, you are paying for features that you ultimately are not going to get much benefit from (remote console, hardware management, and similar bells and whistles), and you will likely have a storage capacity constraint

(i.e., in keeping with your query: concern over SAS disk price per gigabyte, or whether you actually need that performance?).

So. Just my few cents thoughts on this.

In terms of determining your need for SAS vs SATA storage, maybe you should try to determine how I/O-bound your server workload is, to help evaluate whether you need the faster disks. I believe "serious" Proxmox VE installs are recommended to run on RAID 1 or RAID 10 SAS-based disk; but I also know that I've deployed some production RAID 1 SATA (1 TB Seagate enterprise disks on 3ware hardware RAID) Proxmox VE systems, and the performance has been very respectable. So your needs may require testing to understand before deciding what to buy.

Hope this is of some help/use/interest,


Tim Chipman
Fortech I.T. Solutions
http://FortechITSolutions.ca
 
Wow, fortechitsolutions, thanks for the thorough review and recommendations... but we had already ordered the server.

You make very good points. Tom, in an earlier post on this thread, recommends the Intel Modular Flex server:

We plan to certify the new Intel Modular Server, as this is currently the best price/performance solution for most people.

Tom do you have any comments to make on fortechitsolutions post?
 
