Need Help Setting Up New Server and Transferring my Old

Danielc1234

Hi all. Love Proxmox!!! What a great piece of software.

We currently have one server running all our CentOS 5, PHP, Apache, MySQL, and a simple email server. We have around 15 domains on it, and they are heavy PHP and MySQL hogs.

My question is: what is going to be the best setup for my new server running Proxmox VE?

My new system has 300 GB @ 10,000 RPM drives running RAID 10, an i7 chip, 12 GB of memory, etc.
So I was hoping to find the most optimized way to set up the new server, and which configuration would be best for my scenario.

I would love to hear any ideas or suggestions.
Thanks
 
RAID 10 seems to be the nicest compromise between speed and redundancy, but you need four disks and you can only use half the capacity.

I tend to always have a separate disk outside the RAID for log files, swap, and other things that aren't particularly mission critical.
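If you go that route, this is roughly what I mean, assuming the spare non-RAID disk shows up as /dev/sdb with one partition for logs and one for swap (device names and mount points are just examples, adjust to your hardware):

    # format the hypothetical non-RAID disk: sdb1 for logs, sdb2 for swap
    mkfs.ext3 /dev/sdb1
    mkswap /dev/sdb2
    swapon /dev/sdb2

    # /etc/fstab entries so it survives a reboot
    /dev/sdb1   /var/log   ext3   defaults   0 2
    /dev/sdb2   none       swap   sw         0 0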

Feel free to discard this - I am not an expert by any stretch. I am the guy who has managed to destroy the production server twice :)
 
This is good to know, but I still need some help with my initial setup! I don't want to start off with a bad setup and then have to deal with it later.
 
Hi,
how big is your RAID? Is 300 GB one disk, or do you have 4 × 147 GB disks in RAID 10?
Are the VMs all (or mostly) OpenVZ, or KVM?

Udo
 
Udo, we have RAID 10 with four 300 GB drives.
That is the thing... I haven't set any of it up yet, because I was hoping to get some insight into the best way to set up the environment.
I listed above what we need to run on this new server, which is why I was hoping to get some information before I start creating all the settings.
 

Hi,
you don't say whether you will use OpenVZ or KVM.
If you use OpenVZ, the easy way is to install PVE 1.5 and all is fine.
For KVM, external storage is very nice (live migration). With two hosts, DRBD acts like external storage, but it is not very fast.
So, if you want to use DRBD, you need a partition (or RAID slice) for it.

If you like hand work, you can partition your RAID manually so that the LVM partition is aligned on disk block boundaries; this can speed up some disk access (search this forum). I do it like this: make a fresh install to a single SATA disk; take a second SATA disk (disk2), boot a live CD (grml), and tar the contents of pve-root and pve-data into two tar archives on disk2. Then install onto the RAID, connect disk2, and boot the live CD again. Remove the LVM partition on the RAID and create a new, aligned one. Create the volume group and the logical volumes (root, swap and data, leaving 4 GB free in the VG), make the filesystems (ext3) and swap on the LVs, and untar the contents from disk2.
Reboot, and all should work. This has the advantage that the root partition can be smaller and the data partition (for the VMs) correspondingly larger than with the install CD.
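Roughly, the partitioning/LVM part looks like this; the device name /dev/sda, the sizes and the volume group name are only examples, the exact parted syntax depends on your parted version, and the tar/untar of pve-root and pve-data is left out:

    # one aligned partition over the whole RAID device (example device /dev/sda)
    parted -s /dev/sda mklabel msdos
    parted -s /dev/sda mkpart primary 1MiB 100%

    # LVM: physical volume, volume group, logical volumes (leave ~4GB free in the VG)
    pvcreate /dev/sda1
    vgcreate pve /dev/sda1
    lvcreate -L 20G -n root pve
    lvcreate -L 8G  -n swap pve
    lvcreate -L 500G -n data pve    # pick a size so about 4GB stays unallocated

    # filesystems and swap
    mkfs.ext3 /dev/pve/root
    mkfs.ext3 /dev/pve/data
    mkswap    /dev/pve/swap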

Udo
 
I was thinking of making three KVM VMs: VM1 - CentOS running Apache/PHP, VM2 - CentOS running the Scalix mail server, and VM3 - CentOS running MySQL.
I was told to make sure the network is set to use the virtio drivers.
I was just wondering how much CPU, disk space, and memory I should allocate to each VM?
And do you think it would be good to have this setup?
 

Attachments

  • Proxmox setup screen.png (71.1 KB)
Why not OpenVZ? Containers are more efficient, as they don't actually virtualise anything.
 
I was told otherwise. I am just trying to make it as fast as possible, with the flexibility to be able to move things to other servers. Later on we will be buying another server dedicated to SQL, so I was hoping that this way I could just copy over what we have created.
 
I think you should read up on OpenVZ - it has to do a lot less work (it doesn't virtualise anything) than KVM (which has to virtualise everything). Sure, you can get performance enhancements (like the virtio drivers), but OpenVZ will still be quite a bit faster (at least it is on my machine).
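(If you do go the KVM route, you can check from inside a running guest whether virtio is actually being used; these are plain Linux commands, nothing Proxmox-specific, and the exact output will vary:)

    # inside the guest: look for virtio devices and drivers
    lspci | grep -i virtio     # a virtio NIC shows up as a "Virtio network device"
    lsmod | grep virtio        # virtio_net / virtio_pci should be loaded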

If you are using Proxmox then you can migrate (backup and restore) either containers or KVM machines between hosts. If they are in a cluster, then you can even do live migration!
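For the backup-and-restore route, a rough sketch (the exact vzdump options and dump filenames depend on your PVE version; the IDs and paths here are made up):

    # on the old host: dump container 101 into a compressed archive
    vzdump 101 --compress --dumpdir /mnt/backup

    # copy the archive to the new host, then restore it as CT 101
    vzrestore /mnt/backup/vzdump-101.tgz 101

    # KVM guests are restored with qmrestore instead
    qmrestore /mnt/backup/vzdump-qemu-102.tgz 102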

Do bear in mind though that you cannot use containers with the 2.6.32 kernel.
 
yatesco... then don't you think it would be wise to set up two KVMs on my machine? One running my SQL and the other running my Apache, email server, and web server, each of those within an OpenVZ container?
I understand what OpenVZ is, but I am having a hard time distinguishing the difference between KVM and OpenVZ. I know that KVM is more like an actual server that runs virtually? And as I understand it, the virtio drivers are a big deal when it comes to speed.
 
Hiya,

If it helps, I have a single physical machine with 8GB RAM and 1 Xeon quad core CPU and I run about 6 containers (wiki, ldap, web site etc.), 3 Linux KVM machines for Java apps (cannot get JDK to run in a container) and 2 windows KVM machines. Admittedly, none of the machines are exactly heavy load (except one which is a continuous integration server) but the host machine is bored out of its mind.

The performance of the containers is better than the KVM machines and the load on the server is less. If you want to run a Linux machine and you don't need Java, choose the container. If you want to run a windows machine, or you need Java, choose KVM.

OpenVZ works alongside your underlying distribution, sharing resources and files whereas KVM provides essentially a BIOS. There is much more work involved in using KVM. This is all subjective - I don't really notice any performance difference in 'the real world' but academically, KVM works much harder.

And yes, I much prefer one virtual machine to one purpose/service :)

I would suggest you use containers first of all. As a user, it is unlikely you will notice the difference - you can install whatever Linux distribution you want in a container or a KVM guest. Containers are also easier to administer during the initial creation; with KVM, for example, you have to manually edit /etc/network/interfaces inside the guest (if you are using Debian).
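For reference, a minimal /etc/network/interfaces inside a Debian KVM guest looks something like this (the addresses are placeholders for your own network):

    # /etc/network/interfaces (inside the Debian guest)
    auto lo
    iface lo inet loopback

    auto eth0
    iface eth0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1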
 
This machine is 100% used for websites, email, etc., which requires MySQL, PHP, Apache, and so on. We host around 15 websites on it and are looking to add more, which is why we are trying to upgrade to a beefier machine. There seems to be a fine line as to which method to use between a container and KVM. I think you can only use the virtio drivers with KVM? Also, when you mentioned Java apps, are you talking about the JavaScript that the websites use? Our software for our sites is very PHP- and MySQL-heavy, so that is something else for us to consider.
 
By Java I mean Java the programming language produced by Sun, not JavaScript.

You don't *need* virtio with OpenVZ because the IO isn't virtualised, hence the performance benefit of OpenVZ.

You will get the best bang for your buck using containers - would you have one container for each website or one container for all of them?
 
Well, as you can clearly tell... I am very new to all of this. I would most likely believe that we should have one container with all the websites. Do you think that would be the best scenario? Really the only thing I was thinking I would need to keep separate is MySQL, because later on I want to get another dedicated 'real' machine to handle just MySQL, since every website relies on MySQL databases.

It's all a little confusing to me, especially when I really want a handle on what I need to do when I initially set this thing up.
 
The best thing you can do is go ahead and try it - one container for MySQL and one for Apache/PHP.

Make sure there is a template available for your OS of choice (which is?) at http://pve.proxmox.com/wiki/Get_Virtual_Appliances. Download it (if it isn't already there) using the Proxmox GUI, and then create two containers (each with their own IP, obviously).
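From the command line that boils down to something like this, assuming a CentOS 5 template is already in the cache (the template filename, CT IDs and IPs below are only examples; the GUI does the same thing):

    # web/PHP container (CT 101)
    vzctl create 101 --ostemplate centos-5-standard_5.2-1_i386
    vzctl set 101 --hostname web.example.com --ipadd 192.168.1.101 --nameserver 8.8.8.8 --save
    vzctl start 101

    # MySQL container (CT 102)
    vzctl create 102 --ostemplate centos-5-standard_5.2-1_i386
    vzctl set 102 --hostname db.example.com --ipadd 192.168.1.102 --nameserver 8.8.8.8 --save
    vzctl start 102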
 
I have not seen any container templates for CentOS 5 64-bit with PHP, Apache, etc. So I guess I will have to install a plain CentOS 5 template and then install the other software needed?
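I guess inside each container it would then be roughly something like this (package names are the stock CentOS 5 ones, and the CT IDs follow the example above):

    # web container: Apache + PHP
    vzctl enter 101
    yum -y install httpd php php-mysql
    chkconfig httpd on && service httpd start
    exit

    # database container: MySQL
    vzctl enter 102
    yum -y install mysql-server
    chkconfig mysqld on && service mysqld start
    exit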
 
