Intel Modular Blade Server Support / Experiences

Hi, just some brief/general comments on the Intel modular blade server gear:

I've used this kit at one site, which ended up going with a basic VMware ESXi 3.5, and while the Intel Modular Server is 'nice' and 'very functional', I'm not sure it is a 'great fit' as a new purchase for a virtualization platform.

My observations:

- The built-in management features (the web console for the modular blade server) are partially wasted, because any decent virtualization platform already provides its own remote management (e.g. with Proxmox VE you really don't need to connect to the console of the physical host; admin is mostly via the web interface, with occasional SSH access for infrequent tricky admin work).

- The Intel Modular Server's "built-in SAN" effectively amounts to a shared LSI RAID device for all blade modules, but unless your storage needs are quite modest on a per-server basis, the constrained design of the modular chassis will limit your overall storage capacity.

- Additionally, the "built-in SAN" is managed at the level of the Intel web management layer; ultimately what you end up with is direct-attached storage mapped to each blade server module. This storage is then basically just disk: you don't get desirable live-migration features (for example) which might be available with a "modern" storage architecture (i.e. an iSCSI target). Clearly you could use DRBD on Proxmox VE to get live-migration support on this storage, but I think that is not the best fit / use for the "SAN feature" of the Modular Server.

-- i.e., in general, I think you would be better advised to buy economy-level pizza-box 1U servers (dual quad-core or dual six-core, for example) with either traditional local hardware RAID for the OS/Proxmox instance plus DRBD across pairs of pizza-box servers for live-migration capability, OR basic hardware RAID disks in your Proxmox VE 1U servers with iSCSI storage target(s) as the shared storage for live-migration-capable KVM VMs on Proxmox VE.
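For illustration only: on a Proxmox VE host, shared iSCSI-backed storage of the kind described above is typically declared in /etc/pve/storage.cfg. The following is a minimal sketch with made-up names and addresses (portal IP, target IQN, volume group are all assumptions); the exact syntax depends on your Proxmox VE version, so check the storage documentation rather than copying this verbatim:

    # /etc/pve/storage.cfg (excerpt): hypothetical iSCSI target plus shared LVM on top
    iscsi: san0
            portal 192.168.1.50
            target iqn.2010-01.local.example:storage.lun0
            content none

    # volume group created manually on the iSCSI LUN; 'shared' tells Proxmox VE
    # that every clustered node sees the same storage, which is what enables
    # live migration of KVM guests
    lvm: vmstore
            vgname vg_san0
            content images
            shared 1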

Clearly, if you don't want live-migration capability, then you are "OK" with direct-attached disk (either a traditional 2-disk RAID or whatever suits your taste: 4-disk RAID 10, etc.) on your Proxmox VE host. But having live-migration capability is kind of a good thing IMHO, and maybe best not to exclude it since you are making a new hardware purchase.

But really, for the price point of the Intel Modular Server, you are paying for features that you ultimately are not going to get much benefit from (remote console, hardware management, and similar bells and whistles), and you will likely run into storage capacity constraints.

(i.e., in keeping with your query: concern over SAS disk price per gigabyte, versus whether you actually need that performance?)

So, just my two cents on this.

In terms of determining your need for SAS vs. SATA storage: maybe you should try to determine how IO-bound your server workload is, to help evaluate whether you need the faster disks. I believe 'serious' Proxmox VE installs are recommended to run on RAID 1 or RAID 10 SAS-based disks, but I also know that I've deployed some production RAID 1 SATA Proxmox VE systems (1 TB Seagate enterprise disks on 3ware hardware RAID) and the performance has been very respectable. So you may need to do some testing to better understand your needs before deciding what to buy.
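If it helps, one rough way to gauge how IO-bound an existing workload is (before committing to SAS vs. SATA) is simply to watch the disk statistics while the real load runs. A sketch using standard Linux tools, nothing Proxmox-specific; paths and sizes are just examples:

    # install the sysstat tools on a Debian-based host if they are missing
    apt-get install sysstat

    # sample extended disk statistics every 5 seconds while the workload runs;
    # consistently high %util and large await values point at the disks as the bottleneck
    iostat -x 5

    # crude sequential write check on an idle test filesystem
    # (never run this against a disk that holds live data)
    dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=4096 oflag=direct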

Hope this is of some slight help / use / interest,


Tim Chipman
Fortech I.T. Solutions
http://FortechITSolutions.ca

Thanks for this analysis; I can agree. It shows clearly that everyone should take a close look at the needed features before buying a server or designing a bigger implementation. But that's the nice thing with Proxmox VE: you have all the possibilities.

But back to the Modular Server: it's a nice package, and if you need most of the offered features it's quite a good deal. If you need a lot of storage you have to attach another storage box, but this is also available and certified by Intel.
 

Yes, we have looked at the storage boxes as well as just using an iSCSI SAN/NAS in combination with this server.

Tom, how far along are you with certifying the Intel Flex server for Proxmox, and is there anything I can do to help?
 

We still have no Modular Server here in our lab for testing.
 
- The Intel Modular Server's "built-in SAN" effectively amounts to a shared LSI RAID device for all blade modules, but unless your storage needs are quite modest on a per-server basis, the constrained design of the modular chassis will limit your overall storage capacity.

- Additionally, the "built-in SAN" is managed at the level of the Intel web management layer; ultimately what you end up with is direct-attached storage mapped to each blade server module. This storage is then basically just disk: you don't get desirable live-migration features (for example) which might be available with a "modern" storage architecture (i.e. an iSCSI target). Clearly you could use DRBD on Proxmox VE to get live-migration support on this storage, but I think that is not the best fit / use for the "SAN feature" of the Modular Server.
This is actually incorrect; the unit I have on my desk supports full shared storage, plus drive cloning.

I've had live migration working with XenServer, and it worked a treat (it just costs too much, and I wanted KVM).

The storage controller has a miniSAS connector into which I have plugged an external 16 x 3.5" drive chassis. If I were to purchase a second storage controller, I could create a fully redundant external storage solution.

For the cost I paid, this unit is just amazing. No 4 x 1RU setup can match it.

Cheers
Mike
 
Hi Tom

I think we can help with a system for validation - most likely a loaner though. I will contact you directly.

Rich
 

Thanks Mike, that gives me a little more confidence in my choice :p

Which brand chassis are you using with the Intel Flex server? Also, would you be willing to share your configuration and what you are running on your server?
Do you have all 14 SAS drives, and are they configured with RAID?

I am thinking of documenting my process with Proxmox VE and the Intel FlexFrame Server System 6U 25 Series (14 Drive Bay Model): problems I run into, etc. Perhaps you are willing to share some of the issues you ran into and how you resolved them?
 

I have the following:
1 x MFSYS25 (http://www.intel.com/Products/Serve...r-Chassis/modular-server-Chassis-overview.htm)
4 x MFS5520VI (http://www.intel.com/Products/Serve...ute-Module/Server-Compute-Module-overview.htm)
8 x Intel Xeon E5504 quad-core 2.00GHz (2 CPUs per blade)
24 x Kingston 2GB 1333MHz DDR3 ECC (12GB per blade)
7 x Seagate 500GB 2.5"SATA 7200RPM
7 x Seagate Savvio 147GB 2.5" SAS
1 x FSLUNKEY (http://www.intel.com/products/server/modularserver/shared_lun/index.htm)
1 x MFSLUNCOPY (http://www.intel.com/products/server/modularserver/lun_copier/index.htm)
1 x DS-1640 (Norco Product which is only in beta release)

A few notes:
* IE6 is the only version of IE you should use, as the disk manager web pages do not work on anything newer.
* Firefox works well on any platform I have tried.
* I have only been able to get the remote KVM console / disk manager to work on Windows. My Mac has not been upgraded to Snow Leopard; the remote KVM console needs Java 6.
* I've upgraded Proxmox to the latest packages, including the testing repository (see the sketch after these notes). I did this after having issues with migration of multi-CPU guests: CentOS works, but anything Debian-based does not.
* Shared LUN works a treat.
* I did have one instance where two guest images were writing to the same storage. This happened when I created a guest on two different hosts and started installing on both at the same time. I think there is a locking issue in Proxmox.
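For reference, the upgrade path mentioned in the notes above is roughly the following; the repository line is only illustrative (suite and component names depend on the Proxmox VE and Debian release in use), so check the Proxmox download/wiki pages for the correct one:

    # add the Proxmox test repository (line is illustrative; adjust release/component names)
    echo "deb http://download.proxmox.com/debian lenny pvetest" > /etc/apt/sources.list.d/pvetest.list

    # refresh package lists and upgrade the Proxmox VE packages
    apt-get update
    apt-get dist-upgrade

    # reboot if a new pve-kernel was installed, then re-test migration of a
    # non-production multi-CPU guest before trusting it with production VMs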

Hope this helps
Mike
 
The storage controller has a miniSAS connector into which I have plugged an external 16 x 3.5" drive chassis.


Hi Mike

Which brand drive chassis are you using, what are its specs, and how do you like this solution in terms of working well with Proxmox VE?
 
It's a Norco DS-1640; it's a beta product.

It seems to work nicely, and it looks no different from any other storage when used with Proxmox.

It's my understanding that Intel will release a new storage controller this year which supports the new 6G SAS standard. The chassis will not change, but a card will be needed for older compute modules. There is also talk of a 10 Gigabit switch upgrade.

This Modular system is amazing, for the price.

Just wish that KVM was more stable than it is.

Mike
 

Are you finding KVM to not be a stable virtualization technology? I intend to run a Server 2003 domain controller as a KVM instance. Why do you say you wish it were more stable? What stability issues are you running into?
 

I'm talking about the fact that KVM is under heavy development; this causes bugs to creep into working code.

http://www.proxmox.com/forum/showthread.php?t=2764
http://www.proxmox.com/forum/showthread.php?t=2700

You have to test the system for your particular requirements, and once you're happy, stick with that version until you have done a full test of any upgrades.

Read the history of the upgrade to KVM 2.6.31.? and how a bug was found which was fixed in newer code. The problem is that the newer code has created its own problems.

I'm not complaining about Proxmox, they are doing a great job.
 

OK, understood. This is what I have been doing with my OpenVZ instances: first test any Proxmox VE upgrades on a test server, and then upgrade the production servers. Thanks, Mike.
 
Hi Tom

I think we can help with a system for validation - most likely a loaner though. I will contact you directly.

Rich

Tom, are you guys testing the Modular Server yet? I am about ready to start setting up our Flex server and moving all our physical and virtual servers to it.
Will post soon with some of my ideas on how I want to do this, so I can get some feedback on my proposed configurations.

Initially I want to create one storage pool out of 2 x 300GB SAS drives in RAID 1, with two virtual drives on it. I have two compute modules, so each compute module will be assigned one of the virtual drives. Then I will install Proxmox VE on each compute module. After this I will have 8 x 300GB SAS drives of raw storage left to use. I am not sure how I will configure these yet; I will post back later with some ideas.
 

The testing equipment should arrive here at the end of this month.

Storage: depends on many factors; see also the recommendations on the Intel Modular Server website.

Proxmox VE: basically, assign the 8 x 300GB (RAID 10) volume to both compute modules and then follow this: http://pve.proxmox.com/wiki/Storage_Model#Use_local_devices_.28like_.2Fdev.2Fsd...2F.2C_FC.29
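To make the linked approach concrete: once the shared RAID 10 volume is presented to a compute module, it just shows up as another local block device, and that device can be attached to a KVM guest. A minimal sketch (the device path and VM ID are made up, and the qm syntax shown is from newer Proxmox VE releases; the linked wiki page describes the equivalent steps for the version in use):

    # find the LUN's stable device path on the compute module
    ls -l /dev/disk/by-id/

    # attach it to VM 101 as an additional virtio disk (hypothetical ID and path)
    qm set 101 -virtio1 /dev/disk/by-id/scsi-360a98000...

    # because both compute modules see the same LUN, the guest's disk does not
    # need to be copied during a live migration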
 

Thanks, Tom.
Great that you are getting the Modular Server soon!

So, as I understand from your post (assuming I have a total of 10 x 300GB SAS drives at my disposal), you think it is a good idea to install the Proxmox VE environment on a RAID 1 storage pool (2 x 300GB SAS) and then store the virtual machines on a RAID 10 storage pool (8 x 300GB SAS)? (I know I can also create separate storage pools for databases etc.)

As for live migration: I will have another separate server to move virtual machines to in case of maintenance or failure. Is this still only supported for KVM with the Proxmox VE storage model? If so, how can I still have live migration with OpenVZ (if at all possible) on this Flex server?

Thanks-- Petrus
 
So, as I understand from your post (assuming I have a total of 10 x 300GB SAS drives at my disposal), you think it is a good idea to install the Proxmox VE environment on a RAID 1 storage pool (2 x 300GB SAS) and then store the virtual machines on a RAID 10 storage pool (8 x 300GB SAS)? (I know I can also create separate storage pools for databases etc.)

Depends on the load, server types, KVM and/or OpenVZ; you should plan carefully.

As for live migration: I will have another separate server to move virtual machines to in case of maintenance or failure. Is this still only supported for KVM with the Proxmox VE storage model? If so, how can I still have live migration with OpenVZ (if at all possible) on this Flex server?

The described way allows live migration of KVM machines between Modular Server compute modules. You cannot migrate to other servers, as they have no access to the Modular Server storage. If you do not want this inflexibility, you need to go for iSCSI, with all the configuration issues that brings.

OpenVZ: no change.
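As a small illustration of that workflow (the command form below is from current Proxmox VE releases; the same action is also available from the web interface's migrate function): with both compute modules attached to the same LUN, an online migration of a KVM guest boils down to something like:

    # from the node currently running VM 101, move it to the other compute module
    # ("node2" is a hypothetical node name); --online keeps the guest running
    qm migrate 101 node2 --online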
 
Depends on the load, server types, KVM and/or OpenVZ; you should plan carefully.

I will have two compute modules:

Module 1 (on the inside): Proxmox-VE-INSIDE-Master

1. Windows Server 2003 PDC (Active Directory, DHCP, DNS, IAS, file server for 60 users) (on KVM)
2. Windows Server 2003 BDC (FileMaker Server 10 for 10 users, WSUS, DHCP, DNS) (on KVM)
3. Possibly 1 or 2 Linux web application servers for the intranet (on OpenVZ)

Module 2 (DMZ), all OpenVZ: Proxmox-VE-DMZ-Master

1. Ubuntu web server with 8-10 low-traffic websites (WordPress, Joomla, static content)
2. Ubuntu Moodle server (+/- 100 concurrent users)
3. Mail server: Ubuntu, Postfix, Courier IMAP, SquirrelMail, amavisd-new, SpamAssassin, ClamAV; 450 users using IMAP (this may be too much for this one blade, so I may split out the SMTP and virus/spam filtering to another box)

Do you have any recommendations for the above?

The described way allows live migration of KVM machines between Modular Server compute modules. You cannot migrate to other servers, as they have no access to the Modular Server storage. If you do not want this inflexibility, you need to go for iSCSI, with all the configuration issues that brings.

OpenVZ: no change.

So the only way to have live migration work between the Intel Flex modules and an external server is to configure all 10 SAS drives in a single storage pool and then create two virtual drives, each containing a Proxmox VE install with local storage for the VMs, OR to configure iSCSI (which is not fully supported yet?).

Can you point me to some documentation about iSCSI configuration and the "issues" with using it? Are these "issues" specific to configuring iSCSI with Proxmox VE?

Petrus
 
Hi Tom or others who might be able to point me in the right direction on this.

The company we purchased the server from is thinking of writing a white paper with me on the installation of Proxmox VE on the Intel Flex server.

I could really use some help, or direction to resources, on how best to configure this system, as much of the documentation on LUN configuration for the Flex server is based on physical servers and on splitting out databases, OSes and log files onto different LUNs, not on installing a virtual environment on this system.

I currently only have two compute modules. One module I intend to use on my LAN, the other on the DMZ for web servers and an email server.

As I understand it, it is better for security to keep LAN virtual machines on a separate Proxmox installation from the DMZ virtual machines. Am I correct in this assumption?

I think ideally I would have one more compute module to function as a failover server in case one of the other compute modules fails, but we do not have money in our budget for this until next year. (I am assuming this third compute module would have to be of the same specifications as the other two modules in order to work as a failover?)

I was planning to be able to migrate my KVM and OpenVZ systems to two external servers, which I want to cluster with the Proxmox VE environments on the compute modules. The live migration would be for server maintenance and in case a compute module goes down. I would then be able to have the virtual environment run on one of the external servers and still access the storage module.

As I understand it, this would only be possible by using iSCSI for an external server to access the storage module on the Flex server. Is this correct?

In this scenario I would then have to bond at least two gigabit NICs to get enough throughput for the virtual machines to work well. Is this correct?
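For what it's worth, a rough sketch of what bonding two gigabit ports for a dedicated storage network could look like on a Debian-based Proxmox host (/etc/network/interfaces excerpt; interface names, addresses and the LACP mode are assumptions, the ifenslave package must be installed, the switch has to be configured to match, and many iSCSI setups prefer multipath over bonding):

    # hypothetical dedicated iSCSI bond over eth2 + eth3
    auto bond0
    iface bond0 inet static
        address 10.10.10.21
        netmask 255.255.255.0
        bond-slaves eth2 eth3
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4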

~~~~~~~~
Alternative:

The alternative would be to cluster the two compute modules and mix DMZ virtual servers with LAN virtual servers. I would then be able to easily live-migrate virtual machines from one module to the other without the need to configure iSCSI. The question here is: how would I be able to keep the LAN machines secure from the DMZ machines?
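One common way to handle that separation on a single clustered host is to give each security zone its own bridge (on its own physical uplink or VLAN) and only attach each guest to the bridge of its zone, so LAN and DMZ traffic never share a virtual switch. A minimal sketch for /etc/network/interfaces: bridge names follow the usual Proxmox vmbrX convention, everything else (NICs, addresses) is assumed, not a tested recipe:

    # vmbr0 = LAN bridge on eth0, vmbr1 = DMZ bridge on eth1
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.5
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0

    auto vmbr1
    iface vmbr1 inet manual
        bridge_ports eth1
        bridge_stp off
        bridge_fd 0

    # LAN guests get their virtual NIC on vmbr0 only, DMZ guests on vmbr1 only;
    # routing/firewalling between the two zones stays on the perimeter firewall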


Finally: from what I have read, only KVM virtual machines can be live-migrated when using the integrated SAN on the Flex server. OpenVZ containers I would have to store on the same LUN that the Proxmox virtual environment is installed on.


(Will it ever be possible to store OpenVZ containers on a SAN?)

I know these are a lot of questions, and help will be greatly appreciated. I hope to give back to this project by documenting my installation and adding to the wiki.

Thanks in advance!

Petrus.
 
As far as I know, our Intel Modular Server should arrive soon in the test lab, which means the intensive testing will start soon, hopefully.
 
