Hello. For some reason, when I migrated my OpenVZ container from my SolusVM node to my Proxmox server, I saw a dramatic decrease in performance, which is exactly the opposite of what was supposed to happen.
On the SolusVM node, the container shared slower hardware with multiple other OpenVZ containers. The Proxmox server hosts only one other VPS (a KVM guest) and has multiple dual-core Xeons, multiple 10k SAS drives in hardware RAID 1, 8 GB of RAM (1 GB dedicated to the KVM guest, 2 GB to the OpenVZ container), and bonded 2 Gbps ports.
I cannot for the life of me figure out what is causing the slowness. Out of the 2 GB, only 200 MB is being used; the CPU is always at 99% idle; and disk I/O is showing around 30 MB/s, which is normally terrible, but for a basic web server with very little MySQL activity it should be fine. Network download and upload are normally less than 1 Mbps. The OpenVZ container is running a few custom bash scripts, Lighttpd, MySQL, OpenSSH, and DenyHosts.
The KVM VPS is running Apache and MySQL for a PHP-intensive site, and it has continued to run normally since the OpenVZ container was moved. The strange thing is that when I ran the BYTE UNIX Benchmark (UnixBench) on both, the KVM guest scored about half of what the OpenVZ container did. Also, when I SSH into the Proxmox server, the highest process in top is "kvm".
My questions are these:
1) Does mixing KVM and OpenVZ affect performance?
2) If I decide not to run KVM, can I install Proxmox without it creating an LVM2 volume group? (I suspect this is why our disk performance is so slow; we had the same issue when running XenServer, but without LVM we can get 80 MB/s write speeds from these drives.)
3) Can you think of anything that I can do to help determine where the slowness is coming from? I've tried every test I can think of without any luck.
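For reference on question 3, the disk tests I have been running look roughly like this (the file path and sizes are arbitrary examples, not anything special about my setup):

```shell
# Rough sequential-write test; conv=fdatasync forces a flush to disk
# before dd reports, so the page cache doesn't inflate the number.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest

# Other things worth running on the Proxmox host (if the tools are installed):
# iostat -x 1 5   # per-device utilisation and average wait times
# iotop -o        # show only processes currently doing I/O
```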
I'm reluctant to convert my OpenVZ container into a KVM guest because I like being able to use vzmigrate to move my primary webserver between servers (I only have one server running Proxmox, but I have dozens of SolusVM servers using OpenVZ).
Any help or ideas are appreciated.