Host kernel warning when accessing /proc/meminfo in Alpine container

moshpete

New Member
I am new to PVE, testing version 6.1-8 on Debian 10 Buster. I am running into a weird issue with an Alpine 3.9 unprivileged LXC container built from the alpine-3.9-default-20190224_amd64.tar.xz template:

I have set 1 GB of RAM for the container. Here is the output of free -m:
Code:
             total       used       free     shared    buffers     cached
Mem:         32032       2797      29235         59         39         41
-/+ buffers/cache:       2715      29316
Swap:        32683          0      32683
It shows the used and free memory on the PVE host (which has 32GB of RAM), not the container.
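
As a cross-check (assuming cgroup v1 here, which I believe is what this kernel still uses), the limit the kernel actually enforces can be read straight from the cgroup files inside the CT:
Code:
# run inside the container; paths assume cgroup v1
cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # should report 1073741824 (1 GiB) if the limit is applied
cat /sys/fs/cgroup/memory/memory.usage_in_bytes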

Now here is the output of cat /proc/meminfo:
Code:
MemTotal:        1048576 kB
MemFree:          632832 kB
MemAvailable:     675600 kB
Buffers:               0 kB
Cached:            42768 kB
SwapCached:            0 kB
Active:           400884 kB
Inactive:           4620 kB
Active(anon):     362868 kB
Inactive(anon):        0 kB
Active(file):      38016 kB
Inactive(file):     4620 kB
Unevictable:           0 kB
Mlocked:            5320 kB
SwapTotal:       1048576 kB
SwapFree:        1048576 kB
Dirty:               660 kB
Writeback:             0 kB
AnonPages:        362868 kB
Mapped:            38412 kB
Shmem:                 0 kB
KReclaimable:      86668 kB
Slab:               0 kB
SReclaimable:          0 kB
SUnreclaim:            0 kB
KernelStack:        9456 kB
PageTables:        10420 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    49869056 kB
Committed_AS:    4807940 kB
VmallocTotal:   34359738367 kB
VmallocUsed:      353560 kB
VmallocChunk:          0 kB
Percpu:            15680 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:               0 kB
DirectMap4k:      318828 kB
DirectMap2M:     5888000 kB
DirectMap1G:    29360128 kB

But what worries me is that when I run cat /proc/meminfo inside the container, the PVE host logs a kernel warning:
Code:
Apr 21 11:30:01 pvex kernel: [ 2820.797510] ------------[ cut here ]------------
Apr 21 11:30:01 pvex kernel: [ 2820.797708] WARNING: CPU: 3 PID: 23697 at lib/iov_iter.c:1162 iov_iter_pipe.cold.22+0x14/0x23
Apr 21 11:30:01 pvex kernel: [ 2820.798118] Modules linked in: veth nfsv3 nfs_acl rpcsec_gss_krb5 auth_rpcgss nfsv4 nfs lockd grace fscache ebtable_filter ebtables ip_set ip6table_raw iptable_raw ip6table_filter ip6_tables iptable_filter bpfilter bonding softdog btrfs xor zstd_compress nfnetlink_log nfnetlink raid6_pq libcrc32c snd_hda_codec_hdmi intel_rapl_msr intel_rapl_common sb_edac x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm zfs(PO) irqbypass zunicode(PO) zlua(PO) zavl(PO) intel_cstate icp(PO) nouveau mxm_wmi snd_hda_intel wmi snd_intel_nhlt video snd_hda_codec ttm snd_hda_core drm_kms_helper snd_hwdep snd_pcm drm snd_timer i2c_algo_bit snd fb_sys_fops syscopyarea sysfillrect sysimgblt input_leds intel_rapl_perf soundcore ioatdma serio_raw pcspkr dca mac_hid zcommon(PO) znvpair(PO) spl(O) vhost_net sunrpc vhost tap ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ip_tables x_tables autofs4 algif_skcipher af_alg dm_crypt crct10dif_pclmul crc32_pclmul
Apr 21 11:30:01 pvex kernel: [ 2820.798145]  ghash_clmulni_intel aesni_intel aes_x86_64 gpio_ich crypto_simd hid_generic cryptd glue_helper ahci mpt3sas psmouse i2c_i801 libahci lpc_ich r8169 raid_class usbmouse usbkbd realtek scsi_transport_sas usbhid hid
Apr 21 11:30:01 pvex kernel: [ 2820.801483] CPU: 3 PID: 23697 Comm: cat Tainted: P        W  O      5.3.18-3-pve #1
Apr 21 11:30:01 pvex kernel: [ 2820.801483] Hardware name: To be filled by O.E.M. To be filled by O.E.M./X79, BIOS 4.6.5 06/10/2019
Apr 21 11:30:01 pvex kernel: [ 2820.801483] RIP: 0010:iov_iter_pipe.cold.22+0x14/0x23
Apr 21 11:30:01 pvex kernel: [ 2820.801483] Code: 55 48 c7 c7 10 de d2 84 48 89 e5 e8 b1 ed bf ff 0f 0b 31 c0 5d c3 48 c7 c7 10 de d2 84 48 89 4d e8 48 89 55 f0 e8 97 ed bf ff <0f> 0b 48 8b 55 f0 48 8b 4d e8 e9 41 b5 ff ff 48 c7 c7 10 de d2 84
Apr 21 11:30:01 pvex kernel: [ 2820.801483] RSP: 0018:ffffacbda701fcb0 EFLAGS: 00010246
Apr 21 11:30:01 pvex kernel: [ 2820.801483] RAX: 0000000000000024 RBX: ffffacbda701fce0 RCX: 0000000000000000
Apr 21 11:30:01 pvex kernel: [ 2820.801483] RDX: 0000000000000000 RSI: ffff8b3e1f8d7448 RDI: ffff8b3e1f8d7448
Apr 21 11:30:01 pvex kernel: [ 2820.801483] RBP: ffffacbda701fcc8 R08: 0000000000000484 R09: 00000000ffffffff
Apr 21 11:30:01 pvex kernel: [ 2820.801483] R10: 0000000000000001 R11: 0000000000000000 R12: ffffacbda701fdd0
Apr 21 11:30:01 pvex kernel: [ 2820.801483] R13: 0000000000000000 R14: ffffacbda701fdd0 R15: ffff8b3d5747c500
Apr 21 11:30:01 pvex kernel: [ 2820.801483] FS:  00007eff6fde9b68(0000) GS:ffff8b3e1f8c0000(0000) knlGS:0000000000000000
Apr 21 11:30:01 pvex kernel: [ 2820.801483] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Apr 21 11:30:01 pvex kernel: [ 2820.801483] CR2: 000055e23b96bb07 CR3: 00000007649c6006 CR4: 00000000000606e0
Apr 21 11:30:01 pvex kernel: [ 2820.801483] Call Trace:
Apr 21 11:30:01 pvex kernel: [ 2820.801483]  generic_file_splice_read+0x34/0x1c0
Apr 21 11:30:01 pvex kernel: [ 2820.801483]  ? security_file_permission+0xb4/0x110
Apr 21 11:30:01 pvex kernel: [ 2820.801483]  do_splice_to+0x79/0x90
Apr 21 11:30:01 pvex kernel: [ 2820.801483]  splice_direct_to_actor+0xd2/0x230
Apr 21 11:30:01 pvex kernel: [ 2820.801483]  ? do_splice_from+0x30/0x30
Apr 21 11:30:01 pvex kernel: [ 2820.801483]  do_splice_direct+0x98/0xd0
Apr 21 11:30:01 pvex kernel: [ 2820.801483]  do_sendfile+0x1d2/0x3d0
Apr 21 11:30:01 pvex kernel: [ 2820.801483]  __x64_sys_sendfile64+0xa6/0xc0
Apr 21 11:30:01 pvex kernel: [ 2820.801483]  do_syscall_64+0x5a/0x130
Apr 21 11:30:01 pvex kernel: [ 2820.801483]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Apr 21 11:30:01 pvex kernel: [ 2820.801483] RIP: 0033:0x7eff6fd7982c
Apr 21 11:30:01 pvex kernel: [ 2820.801483] Code: c3 48 85 ff 74 0c 48 c7 c7 f4 ff ff ff e9 9e eb ff ff b8 0c 00 00 00 0f 05 c3 49 89 ca 48 63 f6 48 63 ff b8 28 00 00 00 0f 05 <48> 89 c7 e9 7e eb ff ff 89 ff 50 b8 7b 00 00 00 0f 05 48 89 c7 e8
Apr 21 11:30:01 pvex kernel: [ 2820.801483] RSP: 002b:00007fff76283fe8 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
Apr 21 11:30:01 pvex kernel: [ 2820.801483] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007eff6fd7982c
Apr 21 11:30:01 pvex kernel: [ 2820.801483] RDX: 0000000000000000 RSI: 0000000000000003 RDI: 0000000000000001
Apr 21 11:30:01 pvex kernel: [ 2820.801483] RBP: 00007fff76284050 R08: 0000000001000000 R09: 0000000000000000
Apr 21 11:30:01 pvex kernel: [ 2820.801483] R10: 0000000001000000 R11: 0000000000000246 R12: 0000000000000001
Apr 21 11:30:01 pvex kernel: [ 2820.801483] R13: 0000000001000000 R14: 0000000000000001 R15: 0000000000000000
Apr 21 11:30:01 pvex kernel: [ 2820.801483] ---[ end trace 6877a458c9350702 ]---
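
From the call trace it looks like the read goes through sendfile()/splice rather than a plain read() (it ends in __x64_sys_sendfile64 -> do_splice_direct -> generic_file_splice_read). My guess is that BusyBox cat, which Alpine uses, copies the file with sendfile(); that guess can be checked inside the CT with strace (not installed by default on Alpine, apk add strace first):
Code:
# run inside the Alpine CT; traces only the sendfile syscall
strace -e trace=sendfile cat /proc/meminfo > /dev/null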

Here is the output of pveversion -v:
Code:
proxmox-ve: 6.1-2 (running kernel: 5.3.18-3-pve)
pve-manager: 6.1-8 (running version: 6.1-8/806edfe1)
pve-kernel-helper: 6.1-8
pve-kernel-5.3: 6.1-6
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.3-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: residual config
ifupdown2: 2.0.1-1+pve8
libjs-extjs: 6.0.1-10
libknet1: 1.15-pve1
libpve-access-control: 6.0-6
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.0-17
libpve-guest-common-perl: 3.0-5
libpve-http-server-perl: 3.0-5
libpve-storage-perl: 6.1-5
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 3.2.1-1
lxcfs: 4.0.1-pve1
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-3
pve-cluster: 6.1-4
pve-container: 3.0-23
pve-docs: 6.1-6
pve-edk2-firmware: 2.20200229-1
pve-firewall: 4.0-10
pve-firmware: 3.0-7
pve-ha-manager: 3.0-9
pve-i18n: 2.0-4
pve-qemu-kvm: 4.1.1-4
pve-xtermjs: 4.3.0-1
qemu-server: 6.1-7
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.3-pve1

I have no idea whether this is a bug in Debian, PVE, LXC, Alpine, or something else on my system (Intel Xeon E5-2689 on an X79 motherboard)...
 
Hi,

Can you reproduce this every time? Do you notice any other side effects?

Just to be sure, can you post the CT config?
 
Yes, it happens consistently, every time, across multiple reboots. It also happens with an Alpine 3.10 container, but not with a Debian CT. It took me a while to figure out that cat /proc/meminfo was the cause of the kernel warnings. So far I haven't noticed any other effects; both the CT and the host keep working normally, although I am still in a testing phase, so there isn't much CPU load or memory use.
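
In case it helps with reproducing it on demand, something along these lines should trigger and count the warning from the host side (101 is my CT id, adjust as needed):
Code:
# on the PVE host, as root
dmesg | grep -c 'cut here'                      # warning count before
pct exec 101 -- cat /proc/meminfo > /dev/null   # run the suspected trigger inside the CT
dmesg | grep -c 'cut here'                      # count should go up if it fires again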

Here is the CT config from /etc/pve/nodes/pve1/lxc/101.conf:

Code:
arch: amd64
cores: 4
hostname: sol
memory: 1024
net0: name=eth0,bridge=vmbr0,firewall=1,gw=192.168.0.2,hwaddr=46:F4:48:11:22:33,ip=192.168.0.101/24,tag=1,type=veth
ostype: alpine
rootfs: vmdata:subvol-101-disk-0,size=2G
swap: 1024
unprivileged: 1
 
I can also confirm that the problem no longer happens with an Alpine 3.11 container built from the alpine-3.11-default_20200425.tar.xz template. Should I mark this as solved?
 
Hmm, we can mark it as solved, but the older template still has the problem and I can't reproduce it here, so it would be useful to find out why this happens...

What do you have on your PVE host? Can you tell me a little about your setup?
 
I'm not sure exactly what you mean by "what do you have on your PVE host", but I will describe it as best I can, focusing on what might be relevant. It's a Xeon E5-2689 on a Chinese X79 motherboard with 32 GB of ECC RAM. Proxmox was installed on top of Debian 10 (Buster) amd64, with a LUKS-encrypted root on LVM on a SATA SSD, EFI boot, and intel_iommu=on iommu=pt on the kernel command line. Proxmox runs on an ext4 root, but the containers are on ZFS on a separate NVMe SSD. I am passing through a PCIe LSI SAS HBA to a VM, which is working fine.

Soon I will install another PVE instance on a different machine so I can test clustering and replication, and then I will be able to see if the behavior can be reproduced there.
 
I'm not sure exactly what you mean by "what do you have on your PVE host", but I will describe it as best I can, focusing on what might be relevant
That was quite a good description, thank you :) It looks pretty normal to me, apart from the LUKS encryption, but that wouldn't cause this issue for sure. IOMMU could be of interest, though.

Soon I will install another PVE instance on a different machine so I can test clustering and replication, and then I will be able to see if the behavior can be reproduced there.
Let me know how that goes. We can see if that helps us find the root cause.
 
