To SAN or not to SAN

Erwin123

Member
May 14, 2008
I'm researching the best way to start offering virtual services to our hosting clients and to virtualize our current servers for quicker disaster recovery.
I've learned a lot in the last couple of weeks but still have the feeling I'm only seeing part of the picture.
By posting this I hope to get some more views and ideas to investigate.
If this is not the appropriate place for this I apologize, and I hope the mods will move this post to a more appropriate section.

One of the great features of OpenVZ and Virtuozzo is the ability to do live migrations without central storage.
The trade-off is that the containers are not as well separated from the core system as on Xen or VMware, and one container can in theory pull down a complete node.

With Xen and VMware you need a SAN to use this great live migration feature.
The costs to set up a redundant iSCSI SAN (RAID-5 or 6) are high. And you actually need two, because what good is a backup of the files if the motherboard or RAID controller of your SAN dies and you have to wait two days for a new one...
But since almost everyone seems to use them in combination with virtualization projects, I must be missing some important points.

What are the advantages of a SAN over local storage?
Why would anyone use a SAN on OpenVZ and Virtuozzo systems?
 
Erwin123 said:
One of the great features of OpenVZ and Virtuozzo is the ability to do live migrations without central storage.
The trade-off is that the containers are not as well separated from the core system as on Xen or VMware, and one container can in theory pull down a complete node.

With OpenVZ (or Virtuozzo) you get the best performance for hosting and the best tools to manage resources (meaning the highest density). Keep in mind that SWsoft (now known as Parallels) is the market leader here. VMware is the market leader in the enterprise market. Xen-based solutions are trying to move into the enterprise market, but until now VMware has been winning that business.

OS virtualization is the fastest option for Linux servers.

Erwin123 said:
With Xen and VMware you need a SAN to use this great live migration feature.
The costs to set up a redundant iSCSI SAN (RAID-5 or 6) are high. And you actually need two, because what good is a backup of the files if the motherboard or RAID controller of your SAN dies and you have to wait two days for a new one...
But since almost everyone seems to use them in combination with virtualization projects, I must be missing some important points.

What are the advantages of a SAN over local storage?
Why would anyone use a SAN on OpenVZ and Virtuozzo systems?

Thanks for this question. With VMware, Citrix Xen and Virtual Iron (the three biggest players doing full virtualization) you NEED a SAN, otherwise a lot of features do not work. I know Citrix XenServer; they simply do NOT offer local storage (you need to go to the command line to enable it, very interesting).

A very important point concerning VMware ESX: it does not use a Linux kernel and therefore cannot use the integrated drivers; you always need VMware-certified hardware/drivers. For example, there are people out there running VMware ESX 3 certified hardware, and now ESX 3.5 no longer supports their hardware - a very bad situation that costs a lot of money. I have around 30 PCs/servers in the lab for testing, but only 4 of them are able to run ESX. All of them can run Linux :)

Back to your question: SAN or local storage?

Local storage is ALWAYS faster than a SAN (if you use the same controller/disks), especially if you use iSCSI - just think of the overhead and network limitations. If you use just one Gbit NIC you get only around 90 megabytes/second (you can do bonding to extend that).
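
For example, a hypothetical two-port bond on a Debian-style host could look like this (addresses and interface names are made up, the exact option names depend on your ifenslave version, and 802.3ad needs switch support):

Code:
# /etc/network/interfaces - hypothetical bond of eth0 and eth1 (needs the ifenslave package)
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    slaves eth0 eth1
    bond_mode 802.3ad
    bond_miimon 100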

If you have, for example, 10 physical servers with 20 guests each and just one SAN, you have 200 server instances connecting to a single SAN. If you need performance, this is not the setup you want. But performance is not always the most important part.

For Proxmox VE we use quite cheap Adaptec RAID controllers (the 3805) with battery backup. This controller supports SATA and SAS with 8 drives - I love RAID10, as it gives the best read/write performance. With 15k rpm SAS drives you will be very happy, but even very cheap 24/7 SATA drives give around 200 megabytes/second. Adaptec has a newer controller with improved RAID6 performance, the Adaptec 5805 SAS/SATA 8; that one looks even better.

See the performance results of one box with 4 cheap WD 500GB SATA drives in RAID10 on Proxmox VE:

Code:
proxmox-104:~# pveperf
CPU BOGOMIPS:      17027.60
REGEX/SECOND:      710075
HD SIZE:           94.49 GB (/dev/pve/root)
BUFFERED READS:    193.31 MB/sec
AVERAGE SEEK TIME: 9.05 ms
FSYNCS/SECOND:     1285.57
DNS EXT:           29.90 ms
DNS INT:           0.81 ms (proxmox.com)
proxmox-104:~#
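If you want to cross-check numbers like these without pveperf, plain hdparm and dd give a rough approximation; a minimal sketch (the device name and test path are just examples):

Code:
# rough sequential read test, comparable to the BUFFERED READS line above
hdparm -tT /dev/sda

# rough write test that forces data to disk at the end
dd if=/dev/zero of=/root/ddtest bs=1M count=1024 conv=fdatasync
rm /root/ddtest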
If you want to play with a SAN, just take a look at FreeNAS and Openfiler.
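
Once such a target is running, connecting a Linux host to it with open-iscsi works roughly like this; a sketch only (the portal IP and IQN are made up):

Code:
# install the initiator and discover targets on the storage box
apt-get install open-iscsi
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# log in to the discovered target (IQN is just an example)
iscsiadm -m node -T iqn.2008-05.com.example:storage.lun1 -p 192.168.1.50 --login

# the LUN then shows up as a normal block device, e.g. /dev/sdb
fdisk -l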

Final conclusion:

I would start with OpenVZ (or Proxmox VE) on local storage, maybe iSCSI. Compared to the others, OpenVZ servers generate a lot less disk I/O, and in reality this is the most important factor for performance and guest density for most people, much more than anything else.

Backup:
Just use our vzdump to make a full backup of your OpenVZ host. We will extend vzdump to support remote storage, so you can set up a schedule to make full backups to a central backup server or storage - do not forget to do a tape backup of these backup files on the backup server.
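
As a rough illustration only (the exact options depend on your vzdump version, check "vzdump --help"; the paths are examples), a nightly schedule could look like this:

Code:
# /etc/cron.d/vzdump - back up all containers every night at 02:00
# --snapshot uses an LVM snapshot so the containers keep running during the backup
0 2 * * * root vzdump --all --snapshot --compress --dumpdir /mnt/backup-server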
 
Hi Tom,

Thank you for your quick answer, it's all even better than I hoped for.
You guys keep saving me heaps of money this way ;).

To make the party complete, I just found this paper from HP. It tests and compares Xen against OpenVZ under stress:
http://www.hpl.hp.com/techreports/2007/HPL-2007-59.pdf
It gets really interesting from page 5...
 
That paper is really interesting, and it summarizes what we already know ;-) Full virtualization involves high overhead, and that is why we focus on OS-level virtualization.

- Dietmar
 
Erwin123 said:
The trade-off is that the containers are not as well separated from the core system as on Xen or VMware, and one container can in theory pull down a complete node.

All container resources are controlled by the kernel, so one container can never pull down a node.

If you are talking about bugs, they can be anywhere, i.e. a bug in the full virtualization layer can also make the whole system unusable.
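
For illustration, the limits the kernel enforces per container are visible and adjustable from the host; a rough sketch (the container ID and values are just examples):

Code:
# show the per-container resource counters and limits the kernel tracks
cat /proc/user_beancounters

# example: cap container 101 at 200 processes (barrier:limit) and persist the setting
vzctl set 101 --numproc 200:200 --save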

- Dietmar
 
