Hi. It's a bit difficult for me to describe the problem here because of the language (I'm from Russia).
So.
PVE Manager: 4.1-1
Kernel version: Linux 4.2.6-1-pve
Root FS: ZFS (Raidz-1, rpool)
CTs storage: ZFS (rpool)
root@local:/home# zpool status
  pool: rpool
 state: ONLINE
  scan: resilvered 320K in 0h0m with 0 errors on Wed Dec  9 15:11:07 2015
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
            sdc2    ONLINE       0     0     0

errors: No known data errors
No cluster, no kvm. Only LXC cts.
Host machine: Intel SR1530HSH 1U (1xIntel Xeon X3360, 8Gb DDR2)
root@local:/# cat /etc/modprobe.d/zfs.conf
# Min 512MB / Max 2048 MB Limit
options zfs zfs_arc_min=536870912
options zfs zfs_arc_max=2147483648
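To confirm that these module options actually took effect, the ARC reports its current size and configured bounds in /proc/spl/kstat/zfs/arcstats. A minimal sketch: the sample file below mimics the real kstat layout (name, type, data per row) so the snippet runs anywhere; on the host, point awk at the real path instead.

```shell
# Sample of the /proc/spl/kstat/zfs/arcstats layout (illustrative values).
cat > /tmp/arcstats.sample <<'EOF'
size                            4    1073741824
c_min                           4    536870912
c_max                           4    2147483648
EOF

# Print current ARC size and min/max limits in MiB.
# On the host, use /proc/spl/kstat/zfs/arcstats instead of the sample.
awk '/^(size|c_min|c_max) / { printf "%s = %d MiB\n", $1, $3 / 1048576 }' \
    /tmp/arcstats.sample
```

If c_max does not show 2048 MiB, the zfs.conf options were not applied (e.g. the initramfs was not rebuilt before reboot).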
So, my problem. All CTs work normally, but memory usage is strange. For example, an absolutely new CT with Debian 8 from the templates, right after start:
shmon@test-files:~$ free -m
             total       used       free     shared    buffers     cached
Mem:          1024         18       1005         66          0          2
-/+ buffers/cache:          15       1008
Swap:          512          0        512
But when I start some file operations (for example, rsync), the memory fills up within a few seconds. It makes no difference whether the CT has 1 GB or 8 GB of RAM.
If I stop rsync with Ctrl+C, memory usage does not go down until I run

root@local:/# echo 3 > /proc/sys/vm/drop_caches

on the host machine. When my production CTs (not the test one) work with files, they use up all their RAM, and after some time I get a lot of errors about memory limits and killed processes.
As I understand it, ZFS "eats" memory for its cache, and this memory is accounted to the container rather than to the node. On a physical server ZFS only uses RAM that other applications don't need, but here the cgroup sees the cache as memory used inside the container and enforces the limit by killing processes.
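One way to check whether the "used" memory is really page cache charged to the container's cgroup is to read its memory.stat: the cache row counts page cache, the rss row counts anonymous (real process) memory. A sketch, assuming the cgroup-v1 path /sys/fs/cgroup/memory/lxc/&lt;CTID&gt;/memory.stat; the sample values below are illustrative, not from my host.

```shell
# Sample of the cgroup-v1 memory.stat format (illustrative values).
# On the host, read /sys/fs/cgroup/memory/lxc/<CTID>/memory.stat instead.
cat > /tmp/memory.stat.sample <<'EOF'
cache 939524096
rss 15728640
swap 0
EOF

# Compare page cache vs. anonymous memory charged to the container, in MiB.
awk '$1 == "cache" || $1 == "rss" { printf "%s = %d MiB\n", $1, $2 / 1048576 }' \
    /tmp/memory.stat.sample
```

If cache dwarfs rss like this, the container's limit is being hit by cached file data, not by the applications themselves.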
Even while a CT is essentially idle (Saturday and Sunday, when nobody works with it), its memory usage keeps increasing (http://f5.s.qip.ru/qYwKH6tT.png).
This problem started after the upgrade on 11 December.
I don't know what to do. Where is my mistake? Thank you.