Server sometimes slow

The way OpenVZ works, in short: whatever is running inside the containers is running as processes on the main host...
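
That also means you can trace a busy process on the host back to its container. A minimal sketch (the PID is just an example; 'vzpid' ships with the OpenVZ vzctl tools):

Code:
# Find the top CPU consumers on the host:
ps aux --sort=-%cpu | head
# Map a suspicious PID to its container ID (CTID 0 = the host itself):
vzpid 12345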

Do me a favor: stop the mysql sessions and see if the load goes down.
If it does, that's a total of 4 folks that I know of with this same issue.

When mysql is off - the load goes away
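
A quick way to run that test (the container ID is an example; 'vzlist' shows yours):

Code:
# Stop mysql inside each container:
vzctl exec 101 /etc/init.d/mysql stop
# Then watch the host load and I/O wait for a few minutes:
uptime
vmstat 5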

:confused:
 
I understand. The load varies; at times it gets very high and things become very slow. Sometimes even the SSH session to the host or the Proxmox web interface hangs for a few seconds.

This is so frustrating.
 
Is mlocate something added recently?
I know we did not place it into /etc/cron.daily.

AFAIK that is standard. But you can try to exclude '/var/lib/vz' from being scanned in "/etc/updatedb.conf" (see 'man updatedb').

Code:
PRUNEPATHS="/tmp /var/spool /media /var/lib/vz"
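
To verify the change right away, you could time a manual run; updatedb should finish much faster once /var/lib/vz is pruned:

Code:
grep PRUNEPATHS /etc/updatedb.conf
time updatedb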

Does that help?
 
Dietmar,

Great job on the 1.4 release.

Any idea if 1.4 fixes the performance issue discussed in this thread? It does not seem to have changed anything on my end.

Let us know.
Thanks.
 
typo3usa,

Are you confirming that reverting to Proxmox 1.2 would get me back to the version that did not have this issue, or do I have to go back to 1.1?

I am surprised that the great Proxmox guys are not saying much about this issue. It is really sad to have to revert, and this could cause many people to drop Proxmox, or at least Proxmox 1.4, given that most people use databases in their systems.
 
Good to know. And how did you manage with your VPSs? I guess I would have to back them up externally and then re-install.
 
I am surprised that the great Proxmox guys are not saying much about this issue.

Unfortunately, you guys do not answer my questions. And the reported error case is confusing. First, typo3usa claimed that there are high loads when no VM is running - is that still true? Can someone confirm that behaviour?

Or is it only true if there are VMs running mysql?

It is impossible to find the bug if you do not help to track it down.
 
Ok Dietmar,

Do you have any particular question for me?

I posted the top and iotop output already but have not had a reaction from you. Is there anything else that I can do to help you diagnose the issue?
 
I posted the top and iotop output already but have not had a reaction from you. Is there anything else that I can do to help you diagnose the issue?

What you posted was quite useless because it just contains a few processes running inside the VMs.

Anyway, typo3usa claimed that there are high IO delays when no VMs are running. Can you confirm that? If so, please post the output of 'ps auxww' (when no VMs are running).
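
A sketch of how to collect that cleanly (the output path is an example; on older vzctl versions the 'ctid' field may be named 'veid'):

Code:
# Stop all running containers, then capture the full process list on the bare host:
for ct in $(vzlist -H -o ctid); do vzctl stop $ct; done
ps auxww > /tmp/ps-no-vms.txt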
 
Unfortunately, you guys do not answer my questions. And the reported error case is confusing. First, typo3usa claimed that there are high loads when no VM is running - is that still true? Can someone confirm that behaviour?

Or is it only true if there are VMs running mysql?

It is impossible to find the bug if you do not help to track it down.

If this helps:

(Sorry, I'm not at the office, and remote access to those servers is not an option where I am at present; SSH is only allowed from certain IP ranges, for security.)

Anyhow:

One source of load I found was mlocate running against the whole server.
I excluded the vz paths from mlocate's scan, and that helped on that account.

As for the other load: it drops 100% when we stop mysql.

I am not 100% sure you guys should use us as the prime example, because we are running heavier workloads than most.

The server is an 8-core Xeon system with 18GB of RAM, running 4 virtual servers.

All virtual servers are OpenVZ.

One virtual server is roughly 750GB in size, and we have been having constant issues with quota on it (sadly); it runs cPanel.

Another virtual server is roughly 25GB, running SVN (Ubuntu-flavor OpenVZ).
No real issues on that, except we do see CPU get high at times.

Another virtual server is running syslog-ng.

Another virtual server is running cPanel.

When we disable mysql, the host node's disk I/O drops to 0 or really close;
when mysql is running, I/O wait is as high as 70.x%.

[We moved mysql off that system onto a much lower-class system, and that system does not struggle at all.]
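
To reproduce that before/after comparison, something like this works (iostat is part of the sysstat package):

Code:
# Watch extended device stats while mysql is running (5s interval, 3 reports):
iostat -x 5 3
# Stop mysql, run the same command again, and compare await/%util.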

We have a few other servers operating much the same way, running a few different flavors of Proxmox:

some 1.3, some 1.2, others 1.1, and a few 1.4 now.

The 1.3 and 1.4 servers seem to have varied levels of load; I've been trying to chase them a bit.

The 1.1 and 1.2 servers do not have the load issues, however...

While I have chimed in, I think this warrants further review by folks to see what they are running, and perhaps an offer to Dietmar of access (if possible) to the systems, so he can tell what is and is not happening across them.

We can whine, but giving the Proxmox dev folks as much information as possible might help.

One last note: some of the 1.4 systems we have moved over to iSCSI now, so those systems have pretty much zero disk I/O load.

That being said ...

Proxmox has done an awesome job putting all of this together and deserves some serious credit for a job well done.
 
NOTE: when you join a node to the cluster, all templates and ISO images are 'rsynced' to all nodes. Maybe that explains the high I/O load?

That is good to know...

We do not, however, have a large number of ISO images and/or templates.
 
Try removing mlocate; it can impose a _great_ load on your I/O.

Also, please install the "sysstat" package and enable it in /etc/default/sysstat.

After some time has passed, run "sar" - it will show you when the server is loaded and when it is not.

Typically, it can be some cron job.
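
A sketch of those steps on a Debian-based Proxmox host (the sed edit assumes the stock /etc/default/sysstat):

Code:
apt-get install sysstat
# Turn on periodic data collection:
sed -i 's/ENABLED="false"/ENABLED="true"/' /etc/default/sysstat
# Later, review the history:
sar -u    # CPU utilization, including %iowait
sar -d    # per-device I/O activity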
 
Try removing mlocate; it can impose a _great_ load on your I/O.

Also, please install the "sysstat" package and enable it in /etc/default/sysstat.

After some time has passed, run "sar" - it will show you when the server is loaded and when it is not.

Typically, it can be some cron job.

You could also just configure mlocate to ignore certain (large) directories, as shown earlier in the thread.

Our sar output shows:

Code:
time:     device       read/s    rdKb/s  write/s    wrKb/s   rdwr/s  _disk_
04:30:01  disk008-000   89.94   2588.18   166.91   4240.67   256.85
04:40:01  disk008-000   68.29   1275.69   155.94   4317.52   224.23
04:50:01  disk008-000    9.31    121.69   160.23   4338.11   169.54
05:00:01  disk008-000    8.72    124.08   161.72   4120.00   170.44
05:10:01  disk008-000    8.28    171.14   160.45   4054.25   168.74
05:20:01  disk008-000   12.34    303.10   188.17   5054.48   200.51
05:30:01  disk008-000    7.18    102.60   159.01   3643.48   166.19
05:40:01  disk008-000   11.57    103.96   170.11   4103.74   181.68
05:50:01  disk008-000    8.25    131.66   165.02   4162.74   173.28
06:00:01  disk008-000  127.42  11700.96   161.66   4602.04   289.09
06:10:01  disk008-000  245.13   1398.37   159.30   4771.15   404.42

time:     partition  busy  read/s  Kbyt/r  write/s  Kbyt/w  avque  avserv  _part_
04:30:01  sda (8-0)   64%   89.94    28.8   166.91    25.4  17.08  2.50 ms
          sda2              611.45     4.2  1060.15     4.0
04:40:01  sda (8-0)   45%   68.29    18.7   155.94    27.7  13.37  2.00 ms
          sda2              293.59     4.3  1079.36     4.0
04:50:01  sda (8-0)   18%    9.31    13.1   160.23    27.1   6.96  1.03 ms
          sda2               10.67    11.4  1084.52     4.0
05:00:01  sda (8-0)   17%    8.72    14.2   161.72    25.5   5.46  0.97 ms
          sda2               10.62    11.7  1030.00     4.0
05:10:01  sda (8-0)   17%    8.28    20.7   160.45    25.3   7.73  1.02 ms
          sda2                9.81    17.4  1013.56     4.0
05:20:01  sda (8-0)   23%   12.34    24.6   188.17    26.9   9.10  1.13 ms
          sda2               15.93    19.0  1263.61     4.0
05:30:01  sda (8-0)   15%    7.18    14.3   159.01    22.9   5.85  0.93 ms
          sda2                8.34    12.3   910.87     4.0
05:40:01  sda (8-0)   22%   11.57     9.0   170.11    24.1   7.69  1.19 ms
          sda2               12.57     8.3  1025.93     4.0
05:50:01  sda (8-0)   18%    8.25    16.0   165.02    25.2   7.01  1.04 ms
          sda2                9.57    13.8  1040.68     4.0
06:00:01  sda (8-0)   58%  127.42    91.8   161.66    28.5  12.94  2.01 ms
          sda2              172.65    67.8  1150.51     4.0
06:10:01  sda (8-0)   99%  245.13     5.7   159.30    30.0   8.52  2.45 ms
          sda2              252.43     5.5  1192.80     4.0
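
The 06:00-06:10 interval stands out above (99% busy). To zoom into that window on any recorded day, sar accepts start and end times:

Code:
sar -d -s 06:00:00 -e 06:20:00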
 