Hello.
I ran Proxmox 3.x on my server for a year, but I have experienced slow performance since upgrading to Proxmox 4 (I did a clean install from the OVH Proxmox 4 template and restored the containers manually).
I host basic apps for my own use (owncloud, seafile, openvpn, emby, sonarr, couchpotato, deluge...). It used to work like a charm, and now everything is slow to respond.
For example, connecting via SSH to the host or a container is unusually slow (it takes a couple of seconds just to get a reply from the server), connecting to my VPN container can take up to 30 seconds when it is usually done in 5, installing a simple package like "tree" can take a minute or two in a container, and listing files or bash auto-completion takes a couple of seconds...
I also noticed an unusually high IO delay in the Proxmox UI. With all the containers stopped, just running aptitude update on the host raises IO delay to between 5 and 10 %. With all the containers running idle, there is a permanent IO delay of 1 to 5 % (it used to be 0 all the time with Proxmox 3). Downloading one or two torrents with Deluge makes IO delay rise to 70 %!
So I don't understand what is happening; IO delay used to be low all the time with Proxmox 3, except when I was doing backups or copying files (which is expected behaviour).
My server is a Kimsufi with an Intel i5 (4 cores), 16 GB of memory (around 5 GB used at the moment) and a 2 TB hard drive.
Here is the result of pveperf when the server is idle with no container running:
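To put a number on the IO delay the GUI graph shows, I sample iowait directly from /proc/stat (field 6 of the "cpu" line). This is just a small sketch I put together; the two-sample approach and the 5-second interval are my own choice of measurement window.

```shell
# Compute the iowait percentage between two "cpu" lines from /proc/stat
# (fields after "cpu": user nice system idle iowait irq softirq steal).
iowait_pct() {
    # $1, $2: two "cpu ..." samples taken a few seconds apart
    printf '%s\n%s\n' "$1" "$2" | awk '
        NR == 1 { for (i = 2; i <= 9; i++) t1 += $i; w1 = $6 }
        NR == 2 { for (i = 2; i <= 9; i++) t2 += $i; w2 = $6
                  printf "%.1f\n", 100 * (w2 - w1) / (t2 - t1) }'
}

# Usage on the live host, e.g. while a torrent is downloading:
#   a=$(head -n1 /proc/stat); sleep 5; b=$(head -n1 /proc/stat)
#   iowait_pct "$a" "$b"
```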
Code:
CPU BOGOMIPS: 21331.24
REGEX/SECOND: 1355841
HD SIZE: 19.10 GB (/dev/sda2)
BUFFERED READS: 155.27 MB/sec
AVERAGE SEEK TIME: 7.15 ms
FSYNCS/SECOND: 34.23
DNS EXT: 46.06 ms
DNS INT: 1005.14 ms (xxxx.me)
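That FSYNCS/SECOND value looks very low to me; from what I've read, a spinning disk with its write cache enabled usually manages several hundred. A quick check I scripted over saved pveperf output (the 200/s threshold is my own guess, not an official Proxmox figure):

```shell
# Flag a low FSYNCS/SECOND value in saved pveperf output.
# The 200/s threshold is an assumption; adjust to taste.
check_fsyncs() {
    # $1: pveperf output; prints a warning when the fsync rate is below 200/s
    printf '%s\n' "$1" | awk -F': *' \
        '/FSYNCS\/SECOND/ { if ($2 + 0 < 200) print "LOW fsync rate: " $2 "/s" }'
}

check_fsyncs "FSYNCS/SECOND: 34.23"   # prints: LOW fsync rate: 34.23/s
```

If the rate really is that low, I suppose the next thing to check is whether the drive's write cache is enabled (e.g. `hdparm -W /dev/sda` on the host), but I'm not sure that's the cause here.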
Running "iotop" on the host while torrents are downloading shows the processes consuming all the IO:
Code:
Total DISK READ : 5.16 M/s | Total DISK WRITE : 4.09 M/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 2.42 M/s
TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND
21322 be/4 messageb 5.16 M/s 2.29 M/s 0.00 % 96.37 % python /usr/bin/deluged --port=58846 --config=/var/lib/deluge/.config/deluge
574 be/3 root 0.00 B/s 0.00 B/s 0.00 % 92.26 % [jbd2/dm-0-8]
11111 be/3 root 0.00 B/s 0.00 B/s 0.00 % 88.65 % [jbd2/loop10-8]
12310 be/4 root 0.00 B/s 61.28 K/s 0.00 % 18.55 % [nfsd]
12306 be/4 root 0.00 B/s 91.92 K/s 0.00 % 18.52 % [nfsd]
12307 be/4 root 0.00 B/s 153.20 K/s 0.00 % 18.52 % [nfsd]
12308 be/4 root 0.00 B/s 76.60 K/s 0.00 % 11.59 % [nfsd]
12309 be/4 root 0.00 B/s 61.28 K/s 0.00 % 11.58 % [nfsd]
12313 be/4 root 0.00 B/s 107.24 K/s 0.00 % 10.30 % [nfsd]
12311 be/4 root 0.00 B/s 107.24 K/s 0.00 % 9.56 % [nfsd]
12312 be/4 root 0.00 B/s 107.24 K/s 0.00 % 7.10 % [nfsd]
5043 be/0 root 0.00 B/s 1053.25 K/s 0.00 % 0.75 % [kworker/u17:53]
208 be/3 root 0.00 B/s 0.00 B/s 0.00 % 0.62 % [jbd2/sda2-8]
364 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.02 % [kmmpd-loop5]
26999 be/0 root 0.00 B/s 3.83 K/s 0.00 % 0.00 % [kworker/u17:6]
15367 be/4 root 0.00 B/s 3.83 K/s 0.00 % 0.00 % pmxcfs
31593 be/4 root 0.00 B/s 7.66 K/s 0.00 % 0.00 % rsyslogd -c5 [rs:main Q:Reg]
1432 be/4 root 0.00 B/s 11.49 K/s 0.00 % 0.00 % pmxcfs
1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init
2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd]
3 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [ksoftirqd/0]
5 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kworker/0:0H]
7 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_sched]
8 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_bh]
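In case it helps reproduce this, here is a small helper I sketched to pull the PID of the busiest process out of a batched iotop run (`iotop -b -n1 -o`); the parsing assumes iotop's usual column layout, where the IO> percentage is the tenth field.

```shell
# Print the TID of the line with the highest IO> value in iotop output.
# Assumes iotop's default columns: TID PRIO USER READ unit WRITE unit SWAPIN % IO> % COMMAND
top_io_pid() {
    # $1: captured iotop output
    printf '%s\n' "$1" | awk '
        $1 ~ /^[0-9]+$/ && $10 + 0 > max { max = $10 + 0; pid = $1 }
        END { if (pid != "") print pid }'
}
```

As a workaround I could deprioritise Deluge with the idle IO class, e.g. `ionice -c3 -p "$(top_io_pid "$(iotop -b -n1 -o)")"` (iotop needs root), but that wouldn't explain the regression from Proxmox 3.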
I don't know how to identify the root cause. Any suggestions?