kern.log spammed by NFS messages

LeifSec

Since today, when doing backups, the `kern.log` is spammed with a huge amount (several GB, roughly as large as the backup file) of messages:

```
2023-08-04T09:59:19.278111+02:00 hv1 kernel: [793097.803800] NFS: write_begin(dump/vzdump-qemu-109-2023_08_04-09_51_32.vma.dat(136249364), 4096@46983004160)
2023-08-04T09:59:19.278113+02:00 hv1 kernel: [793097.803811] NFS: write_end(dump/vzdump-qemu-109-2023_08_04-09_51_32.vma.dat(136249364), 4096@46983004160)
2023-08-04T09:59:19.278115+02:00 hv1 kernel: [793097.803817] NFS: nfs_updatepage(dump/vzdump-qemu-109-2023_08_04-09_51_32.vma.dat 4096@46983004160)
2023-08-04T09:59:19.278117+02:00 hv1 kernel: [793097.803824] NFS: nfs_updatepage returns 0 (isize 46983008256)
2023-08-04T09:59:19.278119+02:00 hv1 kernel: [793097.803829] NFS: write_begin(dump/vzdump-qemu-109-2023_08_04-09_51_32.vma.dat(136249364), 4096@46983008256)
2023-08-04T09:59:19.278121+02:00 hv1 kernel: [793097.803839] NFS: write_end(dump/vzdump-qemu-109-2023_08_04-09_51_32.vma.dat(136249364), 4096@46983008256)
2023-08-04T09:59:19.278123+02:00 hv1 kernel: [793097.803845] NFS: nfs_updatepage(dump/vzdump-qemu-109-2023_08_04-09_51_32.vma.dat 4096@46983008256)
2023-08-04T09:59:19.278161+02:00 hv1 kernel: [793097.803854] NFS: nfs_updatepage returns 0 (isize 46983012352)
2023-08-04T09:59:19.278164+02:00 hv1 kernel: [793097.804166] decode_attr_type: type=00
2023-08-04T09:59:19.278166+02:00 hv1 kernel: [793097.804171] decode_attr_change: change attribute=7263373637269464507
2023-08-04T09:59:19.278168+02:00 hv1 kernel: [793097.804175] decode_attr_size: file size=36468424704
2023-08-04T09:59:19.278170+02:00 hv1 kernel: [793097.804177] decode_attr_fsid: fsid=(0x0/0x0)
2023-08-04T09:59:19.278172+02:00 hv1 kernel: [793097.804180] decode_attr_fileid: fileid=0
2023-08-04T09:59:19.278174+02:00 hv1 kernel: [793097.804182] decode_attr_fs_locations: fs_locations done, error = 0
2023-08-04T09:59:19.278176+02:00 hv1 kernel: [793097.804185] decode_attr_mode: file mode=00
2023-08-04T09:59:19.278179+02:00 hv1 kernel: [793097.804187] decode_attr_nlink: nlink=1
2023-08-04T09:59:19.278181+02:00 hv1 kernel: [793097.804190] decode_attr_rdev: rdev=(0x0:0x0)
2023-08-04T09:59:19.278182+02:00 hv1 kernel: [793097.804193] decode_attr_space_used: space used=36336340992
2023-08-04T09:59:19.278184+02:00 hv1 kernel: [793097.804195] decode_attr_time_access: atime=0
2023-08-04T09:59:19.278186+02:00 hv1 kernel: [793097.804198] decode_attr_time_metadata: ctime=1691135959
2023-08-04T09:59:19.278187+02:00 hv1 kernel: [793097.804200] decode_attr_time_modify: mtime=1691135959
2023-08-04T09:59:19.278189+02:00 hv1 kernel: [793097.804203] decode_attr_mounted_on_fileid: fileid=0
2023-08-04T09:59:19.278191+02:00 hv1 kernel: [793097.804205] decode_getfattr_attrs: xdr returned 0
2023-08-04T09:59:19.278193+02:00 hv1 kernel: [793097.804207] decode_getfattr_generic: xdr returned 0
2023-08-04T09:59:19.278195+02:00 hv1 kernel: [793097.804211] --> nfs4_alloc_slot used_slots=3fffffff highest_used=29 max_slots=30
2023-08-04T09:59:19.278197+02:00 hv1 kernel: [793097.804216] <-- nfs4_alloc_slot used_slots=3fffffff highest_used=29 slotid=4294967295
2023-08-04T09:59:19.278199+02:00 hv1 kernel: [793097.804220] nfs41_sequence_process: Error 0 free the slot
2023-08-04T09:59:19.278201+02:00 hv1 kernel: [793097.804227] NFS: nfs_update_inode(0:73/136249364 fh_crc=0xef461341 ct=2 info=0x7e200)
2023-08-04T09:59:19.278203+02:00 hv1 kernel: [793097.804243] encode_sequence: sessionid=1690873330:3427275370:5:0 seqid=74518 slotid=9 max_slotid=29 cache_this=0
2023-08-04T09:59:19.278254+02:00 hv1 kernel: [793097.804329] NFS: write(dump/vzdump-qemu-109-2023_08_04-09_51_32.vma.dat, 8192@46983012352)
2023-08-04T09:59:19.278257+02:00 hv1 kernel: [793097.804337] NFS: write_begin(dump/vzdump-qemu-109-2023_08_04-09_51_32.vma.dat(136249364), 4096@46983012352)
2023-08-04T09:59:19.278259+02:00 hv1 kernel: [793097.804351] NFS: write_end(dump/vzdump-qemu-109-2023_08_04-09_51_32.vma.dat(136249364), 4096@46983012352)
2023-08-04T09:59:19.278261+02:00 hv1 kernel: [793097.804357] NFS: nfs_updatepage(dump/vzdump-qemu-109-2023_08_04-09_51_32.vma.dat 4096@46983012352)
2023-08-04T09:59:19.278263+02:00 hv1 kernel: [793097.804365] NFS: nfs_updatepage returns 0 (isize 46983016448)
```

where the `NFS` part is dominant. By truncating `kern.log` and `syslog` (`truncate --size 0`) before the disk space on `local (hv1)` ran out, the backups finally succeeded.
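For reference, a minimal sketch of the cleanup between runs, assuming the default Debian log paths under `/var/log`:

```
# Empty the two affected logs in place instead of deleting them, so
# rsyslog keeps writing to the same open files (default paths assumed)
truncate --size 0 /var/log/kern.log /var/log/syslog
```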


`pveversion -v`
```
proxmox-ve: 8.0.2 (running kernel: 6.2.16-4-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-4
proxmox-kernel-6.2.16-6-pve: 6.2.16-7
proxmox-kernel-6.2: 6.2.16-7
pve-kernel-6.2.16-5-pve: 6.2.16-6
pve-kernel-6.2.16-4-pve: 6.2.16-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.15.108-1-pve: 5.15.108-1
pve-kernel-5.13.19-2-pve: 5.13.19-4
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-3
libknet1: 1.25-pve1
libproxmox-acme-perl: 1.4.6
libproxmox-backup-qemu0: 1.4.0
libproxmox-rs-perl: 0.3.1
libpve-access-control: 8.0.4
libpve-apiclient-perl: 3.3.1
libpve-common-perl: 8.0.7
libpve-guest-common-perl: 5.0.4
libpve-http-server-perl: 5.0.4
libpve-rs-perl: 0.8.5
libpve-storage-perl: 8.0.2
libqb0: 1.0.5-1
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 5.0.2-4
lxcfs: 5.0.3-pve3
novnc-pve: 1.4.0-2
proxmox-backup-client: 3.0.2-1
proxmox-backup-file-restore: 3.0.2-1
proxmox-kernel-helper: 8.0.3
proxmox-mail-forward: 0.2.0
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.2
proxmox-widget-toolkit: 4.0.6
pve-cluster: 8.0.3
pve-container: 5.0.4
pve-docs: 8.0.4
pve-edk2-firmware: 3.20230228-4
pve-firewall: 5.0.3
pve-firmware: 3.7-1
pve-ha-manager: 4.0.2
pve-i18n: 3.0.5
pve-qemu-kvm: 8.0.2-4
pve-xtermjs: 4.16.0-3
qemu-server: 8.0.6
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.1.12-pve1
```
Actually I cannot find any real error messages, and the write speed is OK. `/etc/vzdump.conf` now also contains `tmpdir: /tmp`.
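For completeness, the setting can be verified with a quick grep:

```
# Confirm the temporary-directory override in the vzdump config
grep '^tmpdir' /etc/vzdump.conf
# expected output: tmpdir: /tmp
```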
 
Hey,

it looks like your kernel is just a little verbose. You can check this using `cat /proc/sys/kernel/printk`; it will look something like this: `7 4 6 7`. The first number is the current console log level: only messages more severe than that level (i.e. with a lower number) are printed to the console, and the third number is the minimum value the console log level can be set to. You can change the values temporarily with `echo "7 4 1 7" > /proc/sys/kernel/printk`; to make the change persistent, use `echo "kernel.printk = 7 4 1 7" >> /etc/sysctl.conf` and run `sysctl -p` afterwards.
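Put together as a quick sketch (same values as in the text above):

```
# Show the current values: console_loglevel, default_message_loglevel,
# minimum_console_loglevel, default_console_loglevel
cat /proc/sys/kernel/printk

# Change them temporarily (lost on reboot)
echo "7 4 1 7" > /proc/sys/kernel/printk

# Make the change persistent
echo "kernel.printk = 7 4 1 7" >> /etc/sysctl.conf
sysctl -p
```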


| Log level | Description |
|---|---|
| 0 (KERN_EMERG) | system is unusable |
| 1 (KERN_ALERT) | action must be taken immediately |
| 2 (KERN_CRIT) | critical conditions |
| 3 (KERN_ERR) | error conditions |
| 4 (KERN_WARNING) | warning conditions |
| 5 (KERN_NOTICE) | normal but significant condition |
| 6 (KERN_INFO) | informational |
| 7 (KERN_DEBUG) | debug-level messages |
https://www.kernel.org/doc/html/next/core-api/printk-basics.html
 
It had already been `4 4 1 7`, and even setting it to `3 3 1 3` did not stop it.
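As far as I understand it, that is expected: the `kernel.printk` values only control what is printed to the console, while rsyslog receives everything from the kernel ring buffer and applies its own selectors, so the debug lines still end up in `kern.log`. A quick check of the responsible rule (assuming rsyslog with the stock Debian config writes these files):

```
# printk levels only filter console output; rsyslog logs whatever it reads
# from the kernel ring buffer according to its own selectors.
# Show the rule(s) that route kern.* messages into kern.log:
grep -rn 'kern\.' /etc/rsyslog.conf /etc/rsyslog.d/
```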
Maybe it is similar to https://forum.proxmox.com/threads/log-files-full-of-nfs-messages.82777 - but in my case it is not tied to a specific node.
Disabling the NFS debug logs also does not help: `rpcdebug -m nfsd -c all` + restarting the service.
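A sketch of what might target the right modules instead: the messages above (`write_begin`, `nfs_updatepage`, `decode_attr_*`, `nfs4_alloc_slot`, ...) look like they come from the NFS client and SUNRPC code, so `nfs` and `rpc` would presumably be the relevant `rpcdebug` modules rather than `nfsd` (the in-kernel server):

```
# Show the debug flags currently set for the NFS client and the SUNRPC layer
rpcdebug -m nfs
rpcdebug -m rpc

# Clear all client-side debug flags
rpcdebug -m nfs -c all
rpcdebug -m rpc -c all

# Equivalent sysctl interface (0 = no debug flags set)
echo 0 > /proc/sys/sunrpc/nfs_debug
echo 0 > /proc/sys/sunrpc/rpc_debug
```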

The problem might have appeared after:
- The connection to the NFS server was temporarily broken (because of firewall settings) - but the messages did not start directly after that.
- Updates on the NFS server.

I have seen that there have been updates to Proxmox and the Linux kernel - maybe I will reboot the machine / restart the services on Monday morning and the problem will disappear then.
 
A reboot after the Proxmox + kernel update (-> 6.2.16-6-pve) solved the problem - no NFS messages in the logs anymore (during backups).
 
