Hello
I am currently using Proxmox VE 3.3 to virtualize some machines on a SOHO server. While everything is currently running OK, I am left with some problems that are hard to solve. For example, one of the VMs is a CentOS server which basically runs everything. When something crashes, it is difficult to determine the cause because many hundreds of processes interact with each other. Another shortcoming of having everything in the same VM shows up when some services run into network issues caused by Snort or another firewall process running on my virtualized pfSense router: Snort will issue an alert and block the client IP, but while it tells me which VM caused the alert, it doesn't tell me which service. For these reasons, having several VMs share the workload seems better than a single huge VM.
I may be wrong there, if so please correct me.
So assuming I split the current single huge VM into 5 or 6 identical KVM CentOS VMs, each running its own service (one will be an FTP server, another a mail server, another a DMS server, another a replication server, etc.):
-How would virtio drivers and resource management work between VMs? Because I will probably overcommit, I am thinking about sharing RAM and CPU cores. In other words, are all the necessary drivers already included in modern Linux distros? Do I need to do anything on the host system (PVE) at all?
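For what it's worth, here is how I currently check from inside a guest whether virtio support is present (a rough sketch; the exact module names assume a stock CentOS/Ubuntu kernel, and the drivers may also be compiled into the kernel rather than loaded as modules):

```shell
# Inside a guest VM: look for loaded virtio kernel modules.
lsmod | grep -E 'virtio(_net|_blk|_pci|_balloon)?' \
  && echo "virtio modules loaded" \
  || echo "no virtio modules found (they may be built into the kernel)"

# If nothing shows up, check the running kernel's build config instead:
grep -i virtio "/boot/config-$(uname -r)" | head
```

If the modules (or built-in CONFIG_VIRTIO_* options) are there, the guest side should need no extra work; the device model is then chosen per-VM on the PVE side.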
-My server has 64 GB RAM and 12 cores (real ones, not HT). A research paper from IBM (http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatbestpractices_pdf.pdf) states that the number of cores assigned to a VM should always be as low as the job allows. Other sources state that with KVM the total of all assigned vCPUs should always be less than the actual CPU core count of the host (for best performance). Finally, I also read that the maximum vCPU-to-core ratio is 8:1, so with 12 real cores that would mean a maximum of 96 vCPUs shared among all VMs... Lots of documentation out there but no common agreement. Recommended RAM overcommitment likewise seems to fall anywhere between 1.3 and 1.5 times the real host RAM.
Assuming for the sake of explanation that I end up with the following scenario, would this configuration be acceptable?
VM                                       Cores   RAM (GB)
CentOS VM1 (database server)               4       32
CentOS VM2 (web server)                    2        8
CentOS VM3 (multipurpose server)           2        8
CentOS VM4 (DMS server)                    2        8
CentOS VM5 (mail server)                   2        8
CentOS VM6 (video surveillance server)     4        8
Ubuntu server (another database server)    2        4
pfSense router                             2        4
Total                                     20       80
For my server that's a CPU overcommitment of 1.67 and a RAM overcommitment of 1.25.
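As a sanity check on my own arithmetic, here is the quick calculation I used (the numbers are just the totals from the table above; nothing host-specific):

```shell
# Planned overcommit ratios for the scenario above.
HOST_CORES=12
HOST_RAM_GB=64
VCPUS=$((4+2+2+2+2+4+2+2))       # total assigned vCPUs = 20
RAM_GB=$((32+8+8+8+8+8+4+4))     # total assigned RAM   = 80 GB

awk -v v="$VCPUS" -v c="$HOST_CORES" -v r="$RAM_GB" -v h="$HOST_RAM_GB" \
  'BEGIN { printf "vCPU overcommit: %.2f\nRAM overcommit:  %.2f\n", v/c, r/h }'
# vCPU overcommit: 1.67
# RAM overcommit:  1.25
```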
-Final question: let's say the config above is acceptable and I implement it. I will also occasionally (rarely) need to run a Windows XP VM, to which I am considering assigning 2 cores and 512 MB RAM. Any problems starting the Windows machine while everything else is in action?
I am hoping these questions will generate a few ideas, so please share what you think and your advice!
Thanks!