LXC Ubuntu Samba RAM problem

xhitm3n

New Member
Nov 10, 2015
Hi all,

I have a Samba server running in an Ubuntu LXC container, and the problem is that it consumes all the RAM after a few file transfers. That's normal because of the page cache, but the RAM is never released. The LXC has 1GB and right now it is using 800MB, yet if I restart the container it idles at 80MB... how can I make it release the used RAM?

I think the problem is that Samba isn't closing the connections or something and keeps the cache... I tried putting deadtime = 5 in smb.conf, but the problem persists! The share is public so everybody at home can access it without a login.

Any ideas?
 
I checked on the web UI: the container starts with 80MB of usage, and if I transfer something it rises, but it never goes back down.
 
I normally use htop to see resource usage, but since it's an LXC it's going to show me the host instead... what can I do?
 
It does not show the host. It shows the used/free memory inside the container (e.g. via the free command).

I've also checked with htop and it shows the container's used/limit memory, not the host's. If the used memory is high inside the container, use top and press Shift-M to sort by memory usage.

Again, I don't think the containers have their own disk cache. They are a little more complex than a simple chroot. So, if you have used memory, then a process is at fault.
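The checks described above can be sketched as a quick session inside the container; these are standard procps tools, so the exact output will depend on your system:

```shell
# Overall memory as seen inside the container (in MiB)
free -m

# Top memory consumers, sorted by resident set size (RSS)
ps aux --sort=-rss | head -n 10
```

If smbd dominates the RSS column, the memory is held by a process; if `free` shows it under "cached", it's page cache.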
 
Thanks for replying,

I transferred a 1.4GB file. The LXC had 80MB of used RAM, and the transfer speed was 113MB/s until it reached 1GB of used memory, then it started slowing down. I tried top and htop, but they both show me the host: I gave the LXC 1 CPU and 1GB of memory, yet they show 4 CPUs and 8GB, which are my server's specs. I sorted by memory and smbd was at the top, so it is Samba. After transferring the file, the container sits idle with 980MB of RAM used and never releases it!
 
1. You can use top/free/htop INSIDE the container. No need to use them on the host if you want to check the container. I know you can see the processes from outside, but it is much more helpful to run the tools inside.
2. So your smbd process is reaching ~900MB resident size after the transfer? Would it be hard to post some output from the commands you run?
 
I'm using htop/top inside the container, but it's showing all the host's resources... As for command output, with the default noVNC console I can't copy from the screen, only take a screenshot, and I can't post images on the forum yet... and I can't connect via PuTTY, only to the host! This wasn't happening on ESXi, but that was a virtual machine...
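One way around the noVNC copy/paste limitation is to SSH to the Proxmox host with PuTTY and attach to the container's shell from there; the container ID 100 below is just a placeholder for your own CT ID:

```shell
# Attach to the container's shell from the Proxmox host
# (replace 100 with your container's ID)
pct enter 100

# Alternative using the plain LXC tooling
lxc-attach -n 100
```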
 
Thanks!!! Now it shows the container only, at least for the RAM! On the web UI with noVNC it still shows the host, I don't know why... I ran the command you posted via PuTTY, so now I am able to give some output!

Here is the output from top after downloading a 1.9GB file; it is keeping 1022MB of RAM:
Code:
top - 20:27:07 up 11 min,  0 users,  load average: 0.19, 0.19, 0.15
Tasks:  21 total,   1 running,  20 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.5 us,  1.1 sy,  0.0 ni, 98.0 id,  0.4 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   1048576 total,  1046324 used,     2252 free,        0 buffers
KiB Swap:  7340028 total,    27964 used,  7312064 free.  1028520 cached Mem


  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2112 root      20   0  280984  13896  11796 S   0.0  1.3   0:00.01 smbd
  959 root      20   0   23488   7784    884 S   0.0  0.7   0:00.00 dhclient
 2114 root      20   0  280984   6700   4600 S   0.0  0.6   0:00.00 smbd
 1224 root      20   0  281716   6220   5480 S   0.0  0.6   0:00.01 accounts-d+
 1168 root      20   0   59640   5372   4692 S   0.0  0.5   0:00.00 sshd
 1978 root      20   0  199692   5164   3644 S   0.0  0.5   0:00.00 nmbd
    1 root      20   0   34696   4724   3540 S   0.0  0.5   0:00.23 systemd
  545 root      20   0   33132   4112   3832 S   0.0  0.4   0:00.03 systemd-jo+
 1312 message+  20   0   42376   3416   2996 S   0.0  0.3   0:00.00 dbus-daemon
 2164 root      20   0   18252   3276   2756 S   0.0  0.3   0:00.00 bash
 1081 syslog    20   0  190360   3112   2684 S   0.0  0.3   0:00.00 rsyslogd
 2089 postfix   20   0   27492   2828   2552 S   0.0  0.3   0:00.00 pickup
 2090 postfix   20   0   27540   2784   2504 S   0.0  0.3   0:00.00 qmgr
 2087 root      20   0   25424   2700   2424 S   0.0  0.3   0:00.00 master
 1134 root      20   0   26052   2508   2284 S   0.0  0.2   0:00.00 cron
 2261 root      20   0   22000   2496   2144 R   0.0  0.2   0:00.00 top
 1251 root      20   0   20028   2484   2240 S   0.0  0.2   0:00.00 systemd-lo+

As for htop, I can't copy it!
 
Tried to reproduce it on my side and I couldn't.
I've copied ~800MB and the "cached" value stayed the same.

I have "dir" type containers (on ZFS). Do you by chance have a loopback-mounted one (e.g. on ext4)?
 
I mounted the physical disk on the host and passed it to the LXC by adding this to the container's .conf:
Code:
mp0: /mnt/sdb1,mp=/mnt/sdb1
The disk is mounted at /mnt/sdb1 on the host, same as in the LXC, and formatted with ext4.
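To confirm what the container actually sees through the bind mount, you can compare the mount on both sides; the path below is the one from this thread:

```shell
# Which filesystem backs the path? Run on the host and inside the CT
df -h /mnt/sdb1

# Show the mount source, filesystem type, and options
findmnt /mnt/sdb1
```

Inside the container the source should appear as a bind of the host's ext4 mount, not a separate device.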

Here is my fdisk output from the host:

Code:
Disk /dev/ram0: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
[... identical output for /dev/ram1 through /dev/ram15 trimmed ...]
Disk /dev/loop0: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/loop1: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 05279E6B-F406-4D58-B074-251BD32F5AA2


Device      Start       End   Sectors  Size Type
/dev/sda1      34      2047      2014 1007K BIOS boot
/dev/sda2    2048    262143    260096  127M EFI System
/dev/sda3  262144 625142414 624880271  298G Linux LVM


Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x290c8bdf


Device     Boot Start       End   Sectors   Size Id Type
/dev/sdb1        2048 976773167 976771120 465.8G 83 Linux


Disk /dev/mapper/pve-root: 74.3 GiB, 79725330432 bytes, 155713536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/pve-swap: 7 GiB, 7516192768 bytes, 14680064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/pve-data: 200.7 GiB, 215520116736 bytes, 420937728 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Here is my smb.conf
Code:
[global]
        server string = %h server (Samba, Ubuntu)
        server role = standalone server
        map to guest = Bad User
        obey pam restrictions = Yes
        pam password change = Yes
        passwd program = /usr/bin/passwd %u
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        unix password sync = Yes
        syslog = 0
        log file = /var/log/samba/log.%m
        max log size = 1000
        deadtime = 5
        socket options = TCP_NODELAY IPTOS_LOWDELAY
        preferred master = Yes
        dns proxy = No
        usershare allow guests = Yes
        panic action = /usr/share/samba/panic-action %d
        idmap config * : backend = tdb


[printers]
        comment = All Printers
        path = /var/spool/samba
        create mask = 0700
        printable = Yes
        print ok = Yes
        browseable = No


[print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers


[xhitfile]
        path = /mnt/sdb1
        read only = No
        guest ok = Yes


My problem is exactly like the one in this forum thread:
http://forum.tinycorelinux.net/index.php?topic=12276.0
 
OK, I found something: the problem is that files transferred to the disk are being kept in the container's RAM as cache. For example:

If I transfer a file from my PC to the LXC Samba server, it gets cached in RAM, and as long as the file stays there the cache stays as well; but if I transfer the file back to my PC, the RAM is released...

What can I do to prevent this?
Please advise!
Thanks
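A minimal way to reproduce this without Samba, assuming the share path from this thread, is to write a file and watch the "cached" figure in free; removing the file releases the page cache that backed it:

```shell
# Watch the cache grow as a file is written to the ext4 mount
free -m
dd if=/dev/zero of=/mnt/sdb1/testfile bs=1M count=512 conv=fsync
free -m

# Deleting the file frees the page cache tied to it
rm /mnt/sdb1/testfile
free -m
```

If the same growth shows up with plain dd, the behavior is the kernel's page cache, not Samba.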
 
What is wrong with caching the data blocks in RAM? The kernel will flush the cache when it needs the RAM for something else, like starting a new process.
This happens in the background and normally shouldn't require any user intervention.
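If you want to verify that the page cache is what's being counted, one test is to drop the caches manually (run as root on the host; this is safe but empties the cache, so expect a brief performance dip afterwards):

```shell
# Flush dirty pages to disk first
sync

# Ask the kernel to drop clean page cache, dentries, and inodes
echo 3 > /proc/sys/vm/drop_caches
```

If the container's reported memory usage falls after this, the "used" RAM was reclaimable cache all along.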
 
Yes, I know it's normal behavior, but since the container isn't going to start new processes, it needs the RAM for new disk cache instead, and the old cache isn't flushed, so that's why my Samba share starts slowing down.
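If the slowdown comes from dirty-page writeback rather than the cache itself, lowering the dirty thresholds on the host makes the kernel start writing to disk earlier, so less data piles up in RAM mid-transfer. The values below are illustrative, not recommendations from this thread:

```shell
# Start background writeback sooner and cap dirty pages at 10% of RAM
# (run as root on the host; defaults are typically 10 and 20)
sysctl vm.dirty_background_ratio=5
sysctl vm.dirty_ratio=10
```

To keep the settings across reboots, add the same keys to /etc/sysctl.conf.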
 
