[SOLVED] Proxmox 4.1 + ZFS + LXC. Memory limits...

Shmon

New Member
Dec 15, 2015
Hi. It's a bit difficult for me to describe the problem, because of the language (I'm from Russia).

So.
PVE Manager: 4.1-1
Kernel version: Linux 4.2.6-1-pve
Root FS: ZFS (Raidz-1, rpool)
CTs storage: ZFS (rpool)

root@local:/home# zpool status
pool: rpool
state: ONLINE
scan: resilvered 320K in 0h0m with 0 errors on Wed Dec 9 15:11:07 2015
config:
NAME        STATE     READ WRITE CKSUM
rpool       ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
    sda2    ONLINE       0     0     0
    sdb2    ONLINE       0     0     0
    sdc2    ONLINE       0     0     0

errors: No known data errors

No cluster, no KVM. Only LXC CTs.
Host machine: Intel SR1530HSH 1U (1x Intel Xeon X3360, 8 GB DDR2)

root@local:/# cat /etc/modprobe.d/zfs.conf
# Min 512MB / Max 2048 MB Limit
options zfs zfs_arc_min=536870912
options zfs zfs_arc_max=2147483648

So, my problem. All CTs work normally, but memory usage is strange. For example, a completely fresh CT with Debian 8 from the templates, right after start:
shmon@test-files:~$ free -m
total used free shared buffers cached
Mem: 1024 18 1005 66 0 2
-/+ buffers/cache: 15 1008
Swap: 512 0 512

But when I start some file operations (for example, rsync), the memory runs out within a few seconds. It makes no difference whether the CT has 1 GB or 8 GB of RAM.

If I stop rsync with Ctrl+C, memory usage is not reduced until I run
root@local:/# echo 3 > /proc/sys/vm/drop_caches
on the host machine.
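The effect can be made visible with a small sketch (my own, not from the thread): the current ARC size is readable from arcstats, so you can compare it before and after dropping caches. The reads are guarded because arcstats only exists on a host with the ZFS module loaded:

```shell
# Sketch: compare ZFS ARC size before and after dropping caches, to see how
# much of the "used" memory is really ARC. Guarded for non-ZFS hosts.
arc_size() {
    if [ -r /proc/spl/kstat/zfs/arcstats ]; then
        awk '$1 == "size" { print $3 }' /proc/spl/kstat/zfs/arcstats
    else
        echo 0   # no ZFS module loaded on this host
    fi
}

before=$(arc_size)
sync
# dropping caches needs root; skip silently otherwise
[ -w /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches
after=$(arc_size)
echo "ARC size: ${before} -> ${after} bytes"
```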

When my "non-test" CTs work with files, they use all their RAM, and after some time I get a lot of errors about memory limits and about killed processes.

As I understand it, ZFS "eats" memory for its cache, and this memory is accounted to the container instead of to the node.
On a physical server ZFS uses only the RAM that other applications don't need. But here the cgroup sees the cache as the container's memory usage and stops the container by killing processes.
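What the node actually charges to a container can be checked directly. A minimal sketch, assuming PVE 4.x uses the cgroup v1 memory controller and a hypothetical container ID of 100 (both are assumptions; adjust for your setup):

```shell
# Sketch: read the memory cgroup that PVE 4.x creates for an LXC container.
# CTID and the cgroup v1 path are assumptions; adjust for your host.
CTID=100
CG="/sys/fs/cgroup/memory/lxc/$CTID"

if [ -d "$CG" ]; then
    usage=$(cat "$CG/memory.usage_in_bytes")
    limit=$(cat "$CG/memory.limit_in_bytes")
    echo "CT $CTID: $((usage / 1048576)) MiB used of $((limit / 1048576)) MiB limit"
else
    usage=0
    echo "no memory cgroup for CT $CTID found on this host"
fi
```

If the cached pages from host-side file operations show up in this counter, the container will hit its limit long before its own processes use that much memory.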

Even while the CT is idle (Saturday and Sunday, nobody uses it), memory usage keeps increasing (http://f5.s.qip.ru/qYwKH6tT.png)

This problem started after the upgrade on 11 December.

I don't know what to do. Where is my mistake? Thank you.
 

Nemesiz

Well-Known Member
Jan 16, 2009
Lithuania
Try setting the ZFS ARC size manually:

echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

It's a dynamic setting, it takes effect at runtime.
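A sketch combining both approaches, the runtime write and the modprobe.d line (the 2 GiB value matches the zfs.conf shown earlier in the thread):

```shell
# Sketch: cap the ARC at 2 GiB at runtime (takes effect immediately) and
# print the line that persists it across reboots in /etc/modprobe.d/zfs.conf.
ARC_MAX=$((2 * 1024 * 1024 * 1024))   # 2147483648 bytes

# runtime change; only possible on a host with the ZFS module loaded
if [ -w /sys/module/zfs/parameters/zfs_arc_max ]; then
    echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max
fi

# persistent form (append this line to /etc/modprobe.d/zfs.conf yourself):
printf 'options zfs zfs_arc_max=%s\n' "$ARC_MAX"
```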
 

Shmon

New Member
Dec 15, 2015
Try setting the ZFS ARC size manually:

echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

It's a dynamic setting, it takes effect at runtime.

root@local:/# cat /etc/modprobe.d/zfs.conf
# Min 512MB / Max 2048 MB Limit
options zfs zfs_arc_min=536870912
options zfs zfs_arc_max=2147483648

This works, the limit for ZFS works. But the problem is with the limits in the containers. The ZFS cache shouldn't be accounted to the container, it should count as the node's usage. But Proxmox (or LXC?) accounts it to the CT's memory usage.

PS:
root@local:~# cat /proc/spl/kstat/zfs/arcstats
6 1 0x01 91 4368 1616164350 1702104547500
name type data
hits 4 453188
misses 4 89862
demand_data_hits 4 370709
demand_data_misses 4 36254
demand_metadata_hits 4 65512
demand_metadata_misses 4 40953
prefetch_data_hits 4 1068
prefetch_data_misses 4 3632
prefetch_metadata_hits 4 15899
prefetch_metadata_misses 4 9023
mru_hits 4 115133
mru_ghost_hits 4 0
mfu_hits 4 321089
mfu_ghost_hits 4 0
deleted 4 33
mutex_miss 4 0
evict_skip 4 5
evict_not_enough 4 0
evict_l2_cached 4 0
evict_l2_eligible 4 382976
evict_l2_ineligible 4 0
evict_l2_skip 4 0
hash_elements 4 33660
hash_elements_max 4 33663
hash_collisions 4 1581
hash_chains 4 485
hash_chain_max 4 2
p 4 1073741824
c 4 2147483648
c_min 4 536870912
c_max 4 2147483648
size 4 1356913704
hdr_size 4 14272688
data_size 4 1100441600
metadata_size 4 195747328
other_size 4 46452088
anon_size 4 147456
anon_evictable_data 4 0
anon_evictable_metadata 4 0
mru_size 4 707293696
mru_evictable_data 4 529338880
mru_evictable_metadata 4 131778560
mru_ghost_size 4 0
mru_ghost_evictable_data 4 0
mru_ghost_evictable_metadata 4 0
mfu_size 4 588747776
mfu_evictable_data 4 570971648
mfu_evictable_metadata 4 5301760
mfu_ghost_size 4 0
mfu_ghost_evictable_data 4 0
mfu_ghost_evictable_metadata 4 0
l2_hits 4 0
l2_misses 4 0
l2_feeds 4 0
l2_rw_clash 4 0
l2_read_bytes 4 0
l2_write_bytes 4 0
l2_writes_sent 4 0
l2_writes_done 4 0
l2_writes_error 4 0
l2_writes_lock_retry 4 0
l2_evict_lock_retry 4 0
l2_evict_reading 4 0
l2_evict_l1cached 4 0
l2_free_on_write 4 0
l2_cdata_free_on_write 4 0
l2_abort_lowmem 4 0
l2_cksum_bad 4 0
l2_io_error 4 0
l2_size 4 0
l2_asize 4 0
l2_hdr_size 4 0
l2_compress_successes 4 0
l2_compress_zeros 4 0
l2_compress_failures 4 0
memory_throttle_count 4 0
duplicate_buffers 4 0
duplicate_buffers_size 4 0
duplicate_reads 4 0
memory_direct_count 4 0
memory_indirect_count 4 0
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 0
arc_meta_used 4 256472104
arc_meta_limit 4 2147483648
arc_meta_max 4 256486824
arc_meta_min 4 16777216
arc_need_free 4 0
arc_sys_free 4 130760704
root@local:~#
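arc_summary gives the full picture; as a small sketch of my own, the two arcstats lines that matter here (size vs c_max) can be pulled out with awk:

```shell
# Sketch: boil an arcstats dump (read from stdin) down to "size vs c_max".
arc_brief() {
    awk '$1 == "size"  { size = $3 }
         $1 == "c_max" { cmax = $3 }
         END { printf "ARC: %d MiB used of %d MiB max\n",
                      size / 1048576, cmax / 1048576 }'
}

# On a ZFS host:
#   arc_brief < /proc/spl/kstat/zfs/arcstats
```

With the numbers posted above (size 1356913704, c_max 2147483648) this prints "ARC: 1294 MiB used of 2048 MiB max", i.e. the ARC is within its configured limit.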
 

Nemesiz

Well-Known Member
Jan 16, 2009
Lithuania
Don't blame ZFS. If you want to know how much RAM ZFS eats, do "cat /sys/module/zfs/parameters/zfs_arc_max" to see the ARC limit, and use arc_summary or arcstats to learn more.

As for LXC, what can you see inside the CT? What does "ps aux" show?
 

Shmon

New Member
Dec 15, 2015
Don't blame ZFS. If you want to know how much RAM ZFS eats, do "cat /sys/module/zfs/parameters/zfs_arc_max" to see the ARC limit, and use arc_summary or arcstats to learn more.

As for LXC, what can you see inside the CT? What does "ps aux" show?

root@local:/home# cat /sys/module/zfs/parameters/zfs_arc_max
2147483648

root@test-files:/home/shmon# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.1 15484 1184 ? Ss 18:37 0:00 init [2]
root 810 0.0 0.2 37072 2360 ? Ss 18:37 0:00 /sbin/rpcbind -w
root 938 0.0 0.2 258664 2296 ? Ssl 18:37 0:00 /usr/sbin/rsyslogd
daemon 1004 0.0 0.0 19012 172 ? Ss 18:37 0:00 /usr/sbin/atd
root 1030 0.0 0.1 25892 2036 ? Ss 18:37 0:00 /usr/sbin/cron
root 1118 0.0 0.2 55168 2540 ? Ss 18:37 0:00 /usr/sbin/sshd
message+ 1136 0.0 0.1 42116 1980 ? Ss 18:37 0:00 /usr/bin/dbus-daemon --system
root 1188 0.0 0.3 36152 3492 ? Ss 18:37 0:00 /usr/lib/postfix/master
postfix 1211 0.0 0.3 38216 3464 ? S 18:37 0:00 pickup -l -t unix -u -c
postfix 1212 0.0 0.3 38264 3476 ? S 18:37 0:00 qmgr -l -t unix -u
root 1214 0.0 0.1 12656 1804 tty1 Ss+ 18:37 0:00 /sbin/getty --noclear 38400 tty1
root 1215 0.0 0.1 12656 1816 tty2 Ss+ 18:37 0:00 /sbin/getty --noclear 38400 tty2
root 1222 0.0 0.4 82704 4836 ? Ss 18:37 0:00 sshd: shmon [priv]
shmon 1224 0.0 0.3 82704 3388 ? S 18:38 0:00 sshd: shmon@pts/2
shmon 1225 0.0 0.3 21156 3944 pts/2 Ss 18:38 0:00 -bash
root 1234 0.0 0.2 44744 2548 pts/2 S 18:38 0:00 su
root 1235 0.0 0.2 20248 3080 pts/2 S 18:38 0:00 bash
root 1239 0.0 0.1 17492 2088 pts/2 R+ 18:38 0:00 ps aux

Well, I'm not blaming anyone.
I don't blame anyone but myself :) But it worked normally a few days ago, before the upgrade.
 

Nemesiz

Well-Known Member
Jan 16, 2009
Lithuania
Statistics are one thing, an out-of-memory situation is another. When it happens, does your whole system run out of memory (or come close to it), or does it happen only inside the CT?
 

Shmon

New Member
Dec 15, 2015
Statistics are one thing, an out-of-memory situation is another. When it happens, does your whole system run out of memory (or come close to it), or does it happen only inside the CT?

When it happens inside a CT, the node starts killing processes in that CT. And I can't stop or shut down the CT, and I can't connect to it by web console or SSH.
If I start rsync, for example, and stop it after 2-3 minutes, the CT shows e.g. 800 MB of RAM used. Why isn't it freed? If I start rsync again, it eats even more memory... and then swap.
 

RobFantini

Renowned Member
May 24, 2012
Boston,Mass
I have the same issue, see thread 25136.

In our case zfs send / receive caused the out of memory.

That system has 64 GB of memory, with only 16 GB normally used.
 

Shmon

New Member
Dec 15, 2015
Now I'm testing the same situation on Proxmox 4.0.
All the same parameters and scripts.

root@test-gl:~# free -m
total used free shared buffers cached
Mem: 1024 76 947 60 0 61
-/+ buffers/cache: 15 1008
Swap: 7167 0 7167
So, I think it is a bug. I'll downgrade my servers. (But 4.0 doesn't show CPU usage statistics :( )
 

Nemesiz

Well-Known Member
Jan 16, 2009
Lithuania
This version of Proxmox works normally:
Code:
# pveversion -v
proxmox-ve: 4.0-22 (running kernel: 4.2.3-2-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-36
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-21
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie
openvswitch-switch: 2.3.2-1
 

Shmon

New Member
Dec 15, 2015
This version of Proxmox works normally

Thank you)
apt-get install proxmox-ve=4.0-22
apt-get install pve-container=1.0-21
apt-get install lxc-pve=1.1.4-3
apt-get install pve-manager=4.0-57
reboot

And memory usage is normal now :) But I'll have to reinstall the whole node, because of all the tests from the last few days. I have a lot of trash on the node now.
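One extra step worth considering after a downgrade like this (a sketch of my own; apt-mark hold is a standard APT feature, and the package names are taken from the commands above):

```shell
# Sketch: hold the downgraded packages so a routine apt-get upgrade does not
# pull the 4.1 versions back in. Package names match the downgrade commands.
PKGS="proxmox-ve pve-container lxc-pve pve-manager"

if command -v apt-mark >/dev/null 2>&1; then
    apt-mark hold $PKGS    # needs root on the PVE node
else
    echo "apt-mark not available on this system"
fi
```

Use "apt-mark unhold" on the same names once a fixed version is released.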

Thank you all for participating.

PS: Sorry for my English :)
 

Shmon

New Member
Dec 15, 2015
Some other problems that were fixed by the downgrade:
  • Various troubles with directory permissions in Samba directories;
  • In a CT with Redmine (TurnKey's template), the file "/var/run/mysqld/mysqld.sock" was created with mode 775 under PVE 4.1, but before the update and after the downgrade it is created with 777.
If anybody could report these problems in the bugtracker, that would be excellent. It's somewhat difficult for me :(
 
