Need help with system specification

mla

New Member
Aug 20, 2014
We're in the process of virtualizing our web server environment and could use some help in deciding what machines to buy.
We just purchased a Proxmox VE subscription :)

Here's our situation:

We run a medium-sized web site. Everything is co-located at an ISP so the more we can do remotely, the better.
We're not going to worry about our database servers currently.

So that means we mainly need to virtualize our web and email servers, and that's about it.
I'm mainly looking to Proxmox VE to make it simpler to add new physical hosts and to upgrade VMs.
We only run Linux, and I'm planning to use OpenVZ containers (but I'm open to suggestions).

We currently have 4 aging rack servers running web and mail (a couple 2U and a couple 1U).
All of them should really be replaced soon.

So, I'm looking at what to buy. I picked up Ahmed's Mastering Proxmox. I'm leaning toward buying "branded"
machines just because I don't have the time to deal with building/configuring. Our database servers
are from Dell and we have 4-hour support level with them, so my leaning is to stick with Dell, all else being equal.

Since we're not intending to run a bunch of VMs on each physical node, it seems like we don't need *that* much
RAM and disk.

In looking at Dell's options, I'm thinking the R620s look reasonable. As a first stab:


  • Dual CPU, Intel® Xeon® E5-2620 2.00GHz, 15M Cache, 7.2GT/s QPI, Turbo, 6C 95W (which is what Ahmed recommended in "advanced" setup)
  • Chassis with up to 4 Hard Drives and 2 PCIe Slots
  • 16GB RAM (advanced ECC)
  • PERC H310 Integrated RAID Controller
  • RAID 1 for H710P/H710/H310
  • 2 SSDs, 200GB Solid State Drive SAS Value SLC 6Gbps 2.5in Hot-plug Drive
  • Dual, Hot-plug, Redundant Power Supply (1+1), 495W
  • iDRAC7 Enterprise

Dell is showing that config at USD $5,710.97.

Suggestions? Is it safe to go SSD at this point, and will 2 SSDs in RAID 1 be pretty robust?

And then I still need to think about shared storage...

Thanks for any pointers.

Maurice
 
so my leaning is to stick with Dell,
Hi. Dell sounds pretty good. I've done a couple of installations on the Dell R720xd (the "xd" means extra disk space: you get more HDD bays).

I can confirm iDRAC7 works for HA fencing with both the fence_idrac and fence_drac5 agents. I found that three servers is the minimum if you plan to run a database cluster or Ceph storage; otherwise two are enough. For shared storage I've used NFS (a direct-attached MD1200), DRBD over 10Gb Ethernet, and GlusterFS.
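
For reference, fencing on a Proxmox VE 3.x cluster is configured in /etc/pve/cluster.conf; below is a rough sketch of the relevant fragment for fence_idrac, where the node name, IP address, and credentials are all placeholders:

    <clusternode name="node1" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="node1-idrac"/>
        </method>
      </fence>
    </clusternode>
    <fencedevices>
      <!-- one fence device per node, pointing at that node's iDRAC -->
      <fencedevice agent="fence_idrac" name="node1-idrac" ipaddr="192.168.1.20" login="root" passwd="secret"/>
    </fencedevices>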

With OpenVZ there was an issue where backups took a very long time if a container had too many small files on its disk: the Proxmox backup for OpenVZ runs find, which takes a while to walk all of those files. That went away once we migrated the container to a KVM VM; now the backup runs faster because it uses a snapshot.
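
For illustration, a snapshot-mode backup from the CLI looks something like this (the guest ID and dump directory here are made up):

    # snapshot-mode backup of guest 101, LZO-compressed, written to a local directory
    vzdump 101 --mode snapshot --compress lzo --dumpdir /mnt/backup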

It depends on how you're going to use the SSDs. What for? I see no reason to use SSDs for the OS root or for VMs unless you dedicate them to database storage. Perhaps for Ceph object storage or a ZFS zpool cache? I might be wrong, but in my opinion SSDs in RAID 1 are a waste of money.

Cheers.
 

Thanks! Okay, a few follow-up questions.

So with our environment, most of the VMs won't really need to be backed up regularly. I mean, we'll need a template for the VMs, of course, but they will just be running as web servers, so everything of importance will either be in the database or on NFS. The one exception, I guess, is our mail server. All it does is send mail for the web servers, but I guess I'd rather not lose any data there that's queued.

Anyway, the question being, do you think we need HA and shared storage? Is it possible to migrate VMs easily between physical nodes without it? We won't need "live" migration. We'll just need to be able to update a VM (with latest patches, etc.) and then deploy it across the physical nodes. Is a cluster overkill for such an environment?

Regarding the SSD, I was mainly interested in fault-tolerance. Think SAS disks are likely as robust? There shouldn't be a lot of disk activity, I would hope. Mainly web server logging.
 
Anyway, the question being, do you think we need HA and shared storage?
It's up to you; it depends on your architecture. Shared storage is such an easy thing to implement, so why not do it? It's useful at least for manual migration, maintenance, and load balancing. You can implement HA just for the web servers using heartbeat, haproxy, or nginx, and HA in Proxmox can easily be enabled for whichever servers you choose, e.g. to make sure the frontend nginx instances are always up and running. I would recommend playing with these technologies first, and always have backups and a rollback plan.
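
For example, a minimal nginx frontend balancing two web VMs could look like this (the backend addresses are placeholders):

    # inside the http { } block of nginx.conf
    upstream web_backend {
        server 10.0.0.11;
        server 10.0.0.12;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://web_backend;
        }
    }
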
Is it possible to migrate VMs easily between physical nodes without it? We won't need "live" migration. We'll just need to be able to update a VM (with latest patches, etc.) and then deploy it across the physical nodes.
Yes. By default VMs migrate via SSH: the disk image (and, for live migration, the memory dump) is copied, then the machine is stopped on the old host and started on the new one. You don't need shared storage for that.
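
As a sketch (the guest ID and node name here are hypothetical), the command-line equivalent for a KVM guest:

    # offline-migrate guest 101 to the node named "node2"
    qm migrate 101 node2
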
Is a cluster overkill for such an environment?
Proxmox nodes do need to be joined in a cluster before you can drive VM migrations from the web UI.
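
Setting that up is short; a sketch with a made-up cluster name and IP address:

    # on the first node: create the cluster
    pvecm create mycluster
    # on each additional node: join, pointing at the first node's IP
    pvecm add 192.168.1.10
    # check membership and quorum
    pvecm status
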
Regarding the SSD, I was mainly interested in fault-tolerance. Think SAS disks are likely as robust? There shouldn't be a lot of disk activity, I would hope. Mainly web server logging.
The SAS disks Dell supplies fit well, and compared with SATA they have better performance under load for the same price. If you're not after shared storage, I think you can get both speed and fault tolerance out of RAID. The other alternative might be ZFS.
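
If you go the ZFS route, a mirrored pool gives RAID 1-style fault tolerance in one command (the pool and device names are placeholders):

    # create a two-disk mirror named "tank"
    zpool create tank mirror /dev/sdb /dev/sdc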
 
OK, thanks again. Based on your comments, here's an updated system:


  • Dual CPU, Intel® Xeon® E5-2620 2.00GHz, 15M Cache, 7.2GT/s QPI, Turbo, 6C 95W
  • Chassis with up to 4 Hard Drives and 2 PCIe Slots
  • 32GB RAM (advanced ECC)
  • PERC H310 Integrated RAID Controller
  • RAID 1 for H710P/H710/H310
  • 2 x 300GB 15K RPM SAS 6Gbps 2.5in Hot-plug Hard Drive
  • Dual, Hot-plug, Redundant Power Supply (1+1), 495W
  • iDRAC7 Enterprise

I increased the RAM from 16GB to 32GB (+$287).
I dropped the SSDs and went with 2 SAS disks in RAID 1. I'd probably buy some spare disks to keep around.
I'm going with iDRAC Enterprise for remote SSH access, which I hear isn't available in Express.

Total cost: USD $4,430

What would you change?

Also, our network is currently 1Gb. I see there's a bunch of 10Gb SFP+ adapters listed as options, which I know nothing about. Is 10Gb becoming common, and would it require swapping out all our existing 1Gb gear, or is it backward compatible? I'm sure we don't actually need 10Gb yet, but I wonder whether I should future-proof us a bit by getting it on the new machines.

Also, if anyone has recommendations for Proxmox-savvy folks in Seattle who could maintain our network on a consulting basis (or whatever), I would be interested.
I'm a bit swamped :(

Thanks,

Maurice
 
If you opt for 10Gb NICs you also need 10Gb switches, otherwise you will not see any performance gains. 10Gb NICs are backward compatible with 1Gb networks.
 
Hi Maurice

I'll try to give you my opinion based on our experience installing virtualized environments.

1 - Whether or not you virtualize, the first thing you need is to put the project down on paper: lay out what you have and what you need, with growth forecasts for the short and medium term.

I mean things like whether or not you're going to run 5 servers with web stores, which operating systems, how much database capacity, how many LAN cards each machine needs to use, and so on. All of this conditions which physical machines you need to acquire.

An example: 5 VMs of 2 cores each, with 1GB of RAM and 100GB of disk apiece, should not be put on a 6-core host; the VMs alone would need 10 cores, and you would expose the host to potential problems under load. Add the mail machines and the setup becomes even more chaotic. Another important point is the NICs: make sure the LAN connectivity is also expandable. As for 1Gb versus 10Gb, the reality is that if your provider has a 10G LAN, perfect; if a 10G LAN won't solve anything for you, better to invest in something productive such as storage or RAM.

2 - When we go virtual, we put "all our eggs in one basket", so we should think about what happens if the basket falls. (A safe omelet.)

Generally you should buy 2 machines to ensure the safety of the environment, place them in a cluster, evaluate which machines will be critical to the system, and put those under HA. That gives us a safety margin, so a minimum of 2 machines is common sense; if that isn't possible now, make a plan to build the environment out in the short term.

3 - Disks in the machines, or shared storage?

Shared storage, without hesitation: a SAN/NAS of the QNAP, Synology, EMC, etc. type that gives us an iSCSI environment. That lets us keep a few different tiers of storage, some on SATA and others on 15K or 10K SAS, preferably with different RAID levels (RAID 10 where possible) and as much cache as you can get for transaction-heavy workloads. SSD or SAS drives? In our experience, without a doubt, 15K SAS in RAID performs better than SSDs.
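
On the Proxmox side, attaching an iSCSI LUN is a few lines in /etc/pve/storage.cfg; a sketch with a made-up storage ID, portal address, and IQN:

    iscsi: san1
        portal 192.168.1.50
        target iqn.2014-08.local.san1:storage.lun0
        content images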

4 - If I were sizing your system I would choose HP, whether AMD or Intel, instead of Dell. I'm not saying Dell or any other manufacturer is bad, but on HP systems we have achieved better performance, both from the AMD processors and from the cache on the RAID cards, compared with other manufacturers' machines of identical specification.
Look at the HP DL360e (similar to your Dell; the AMD-based HP DL385p is a recommended, luxurious machine for Proxmox), and for the RAID card, without hesitation, take one with FBWC or BBWC so you have a write cache.


To conclude, I would recommend 2 HP DL360e or DL385p machines. The exact requirement depends on your needs: at least 8 cores, 12 or 16 if possible; 12GB of RAM at minimum, ideally 16 or 24GB; and at least 4 x 1Gb NICs, expandable to 10Gb or more, so that you have your current needs covered and future expansion secured.

For the storage enclosure, at least a QNAP TS-879U-RP: it has 8 bays and allows 10G. It is SATA 3 only, but it gives great results in RAID. If your budget allows it, a Synology RackStation RS10613xs is an excellent choice, taking both SAS and SATA drives.

A second SAN for copies of the VMs would also be excellent. It can be a lower-end unit (a SATA QNAP) as long as it has enough disk capacity to hold the VM copies.

If you want help, send me as much information about your environment as you can and I will be happy to help you size it at no cost.
jcerezo@gmail.com

Best regards
Jose Vte.
