Proxmox loses SMB/CIFS connection

slize26

Since the latest Proxmox update my host has been losing the SMB/CIFS connection to the share that holds my VM images. This instantly freezes all VMs running from that share. The storage is provided by TrueNAS Scale (latest version), which is running stable. The connection between the systems is a 20 Gbit/s LAG. The Proxmox version is: Virtual Environment 7.2-4


May 20 16:08:32 pve1 kernel: CIFS: VFS: \\nas.inf Error -512 sending data on socket to server
May 20 16:08:32 pve1 kernel: CIFS: VFS: \\nas.inf Error -512 sending data on socket to server
May 20 16:08:32 pve1 kernel: CIFS: VFS: \\nas.inf Error -512 sending data on socket to server
May 20 16:08:32 pve1 kernel: CIFS: VFS: \\nas.inf Error -512 sending data on socket to server
May 20 16:08:32 pve1 kernel: CIFS: VFS: \\nas.inf Error -512 sending data on socket to server
May 20 16:08:32 pve1 kernel: CIFS: VFS: \\nas.inf Error -512 sending data on socket to server
May 20 16:08:32 pve1 kernel: CIFS: VFS: \\nas.inf Error -512 sending data on socket to server
May 20 16:08:32 pve1 kernel: CIFS: VFS: \\nas.inf Error -512 sending data on socket to server
May 20 16:08:32 pve1 kernel: CIFS: VFS: \\nas.inf Error -512 sending data on socket to server
May 20 20:28:22 pve1 kernel: CIFS: VFS: \\nas.inf Error -512 sending data on socket to server


How can I fix this issue?
 
Hi,

which kernel version are you running? You can check with the uname -r or pveversion commands.

You could also test whether pinning an older kernel version helps, and let us know here if it works.
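For example, something along these lines on the CLI (the version number is just a placeholder; pick one that proxmox-boot-tool actually lists on your system):
Code:
# show the currently running kernel
uname -r
# list the kernels known to the bootloader
proxmox-boot-tool kernel list
# pin an older kernel as the default boot entry
proxmox-boot-tool kernel pin 5.13.19-6-pve
# undo the pin again later
proxmox-boot-tool kernel unpin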
 
Thanks for the reply!

That's my current version:
Code:
root@pve1:~# pveversion
pve-manager/7.2-4/ca9d43cc (running kernel: 5.15.35-1-pve)

I am not sure how to select/pin an older kernel version. Is there an option in the GUI, or does it have to be done via the command line?
 
The VM was able to boot and has been running "stable" for ~60 minutes now, so changing the kernel did improve the situation. Thank you! Will this kernel issue be fixed in the next release?

Edit: The performance issues I mentioned (now removed) were caused by the missing host CPU flag on the TrueNAS VM. I don't know why it got lost, but now it's working fine.
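For reference, setting the CPU type back can also be done from the CLI; a minimal sketch, assuming a hypothetical VM ID of 100:
Code:
# set the VM's CPU type back to "host" (passes the host CPU flags through to the guest)
qm set 100 --cpu host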

Code:
root@pve1:~# proxmox-boot-tool kernel list
Manually selected kernels:
None.

Automatically selected kernels:
5.13.19-6-pve
5.15.35-1-pve




root@pve1:~# proxmox-boot-tool kernel pin 5.13.19-6-pve --next-boot
Setting '5.13.19-6-pve' as grub default entry and running update-grub.
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.15.35-1-pve
Found initrd image: /boot/initrd.img-5.15.35-1-pve
Found linux image: /boot/vmlinuz-5.13.19-6-pve
Found initrd image: /boot/initrd.img-5.13.19-6-pve
Found linux image: /boot/vmlinuz-5.13.19-5-pve
Found initrd image: /boot/initrd.img-5.13.19-5-pve
Found linux image: /boot/vmlinuz-5.13.19-4-pve
Found initrd image: /boot/initrd.img-5.13.19-4-pve
Found linux image: /boot/vmlinuz-5.13.19-3-pve
Found initrd image: /boot/initrd.img-5.13.19-3-pve
Found linux image: /boot/vmlinuz-5.13.19-2-pve
Found initrd image: /boot/initrd.img-5.13.19-2-pve
Found linux image: /boot/vmlinuz-5.13.19-1-pve
Found initrd image: /boot/initrd.img-5.13.19-1-pve
Found linux image: /boot/vmlinuz-5.11.22-7-pve
Found initrd image: /boot/initrd.img-5.11.22-7-pve
Found linux image: /boot/vmlinuz-5.11.22-5-pve
Found initrd image: /boot/initrd.img-5.11.22-5-pve
Found linux image: /boot/vmlinuz-5.11.22-4-pve
Found initrd image: /boot/initrd.img-5.11.22-4-pve
Found linux image: /boot/vmlinuz-5.11.22-3-pve
Found initrd image: /boot/initrd.img-5.11.22-3-pve
Found linux image: /boot/vmlinuz-5.11.22-1-pve
Found initrd image: /boot/initrd.img-5.11.22-1-pve
Found memtest86+ image: /boot/memtest86+.bin
Found memtest86+ multiboot image: /boot/memtest86+_multiboot.bin
done
Pinned for next boot only.




root@pve1:~# pveversion
pve-manager/7.2-4/ca9d43cc (running kernel: 5.13.19-6-pve)
 
+1, I just upgraded and have had huge CIFS issues. Downgraded the kernel thanks to this post and have been solid since.

Looking into it, there were some deprecated security protocols in 5.15, perhaps related?
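If someone wants to compare, the dialect and security type that were actually negotiated for a mount can be checked on the host; a minimal sketch:
Code:
# shows the negotiated SMB dialect (e.g. Dialect 0x311 = SMB 3.1.1) and the security type per session
cat /proc/fs/cifs/DebugData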
 
+1, I also upgraded and it broke all my network-mounted VMs.

Code:
Jul 19 08:20:49 pve1 kernel: [ 1818.227507] CIFS: VFS: \\192.168.19.10 Error -512 sending data on socket to server
Jul 19 08:20:49 pve1 kernel: [ 1818.262925] CIFS: VFS: \\192.168.19.10 Error -512 sending data on socket to server
Jul 19 08:20:49 pve1 kernel: [ 1818.263705] CIFS: reconnect tcon failed rc = -11
Jul 19 08:20:49 pve1 kernel: [ 1818.270835] CIFS: VFS: \\192.168.19.10 Error -512 sending data on socket to server

Code:
root@pve1:~# uname -r
5.15.39-1-pve

Pinned the kernel to 5.13.19-6-pve and the problem went away.
 
Still the same issue with the latest version:
Code:
root@pve1:~# pveversion
pve-manager/7.2-7/d0dd0e85 (running kernel: 5.15.39-3-pve)

Is there any ETA for a fix?
 
I had this same problem on a fresh install of Proxmox and had no previous kernel to pin to.
CIFS: VFS: \\ADDRESS_TO_SMB_SERVER Error -512 sending data on socket to server
Came across this post and one from Reddit. This change seems to have worked for me (96% done as of this writing):
https://www.reddit.com/r/homelab/comments/wk32wu/new_to_proxmox_io_error_on_new_vms/
Change Async IO to native under the disk's Advanced tab.
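For anyone who prefers the CLI, the same change can roughly be made like this; a sketch with hypothetical VM ID, storage and volume names (keep whatever other options are already on the disk line, see qm config):
Code:
# switch the disk's Async IO mode to native (example values, adjust to your own disk line)
qm set 100 --scsi0 mystorage:100/vm-100-disk-0.qcow2,aio=native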
 
Thank you very much for the reply! I switched the mode a few days ago and can confirm that it's working fine now.
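In case anyone wants to double-check, the disk line in the VM config should now show the new mode; hypothetical VM ID:
Code:
# the affected disk line should now contain aio=native
qm config 100 | grep aio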
 

Life saver! Thanks for this tip!
 
Hi everyone, I transferred a VM from an old PVE 6.4-15 installation to a 7.2-11 cluster. The VM has two disks on an SMB share and after a while it crashed with an I/O error message. I changed Async IO to native and now, after 3 hours, it seems to be working normally. I think this is worth investigating further, because it risks causing damage.
 
Here is some data:

root@pve:~# pveversion -v
proxmox-ve: 7.2-1 (running kernel: 5.15.60-2-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-qemu-kvm: 7.0.0-3

syslog
Oct 26 09:05:17 pve kernel: [21795.090179] CIFS: VFS: \\smb Error -512 sending data on socket to server
Oct 26 12:26:05 pve kernel: [33843.415357] CIFS: VFS: \\smb Error -512 sending data on socket to server
Oct 26 12:26:05 pve kernel: [33843.441435] CIFS: VFS: \\smb Error -512 sending data on socket to server
Oct 26 12:26:06 pve kernel: [33843.465384] CIFS: VFS: \\smb Error -512 sending data on socket to server
 
Hi,
Hi everyone, I transferred a VM from an old PVE 6.4-15 installation to a 7.2-11 cluster. The VM has two disks on an SMB share and after a while it crashed with an I/O error message. I changed Async IO to native and now, after 3 hours, it seems to be working normally. I think this is worth investigating further, because it risks causing damage.
the issue was already investigated, but unfortunately no solution has been found yet in the discussions with the kernel developers. For Proxmox VE, I have now sent a patch that simply disables io_uring for CIFS storages until the issue can be fixed in the kernel.
 
Hi,

the issue was already investigated, but unfortunately no solution has been found yet in the discussions with the kernel developers. For Proxmox VE, I have now sent a patch that simply disables io_uring for CIFS storages until the issue can be fixed in the kernel.
Fiona - was there any further movement on this issue? As far as I can see, it more or less prevents VM disk images from being moved to a CIFS share. Reproduces fairly reliably on pve-manager/7.3-3/c3928077 (running kernel: 5.15.74-1-pve). Thanks!
 
Hi,
Fiona - was there any further movement on this issue? As far as I can see, it more or less prevents VM disk images from being moved to a CIFS share. Reproduces fairly reliably on pve-manager/7.3-3/c3928077 (running kernel: 5.15.74-1-pve). Thanks!
I'm not aware of any upstream fix yet, but if you have qemu-server >= 7.2-6 installed and the disk has not explicitly set Async IO to io_uring (if the setting is default, io_uring will not be used for CIFS), you should not run into the issue.

If you do have different settings and still run into the issue, please share the output of
Code:
pveversion -v
qm config <ID>
cat /proc/fs/cifs/DebugData
cat /etc/pve/storage.cfg # section for the CIFS share is relevant
and describe the error you are seeing in detail.
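Another quick way to see which AIO mode QEMU will actually get for the disks is to look at the generated command line; a minimal sketch with a hypothetical VM ID:
Code:
# print the QEMU command line that would be used and extract any explicit aio= settings
# (no output means no explicit aio option is set, so QEMU's own default applies)
qm showcmd 100 | grep -o 'aio=[a-z_]*'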
 
@fiona thank you for your quick response. I am starting to wonder whether what I am observing is this same issue or a related but separate one. My problems arise when I attempt to move a disk of a live VM from Ceph to a CIFS mount. The action fails - sometimes immediately, sometimes after copying some data. Syslog reports the same Error -512, which is why I ended up in this thread:

Code:
Jan 28 23:25:15 proxmox-intel pvedaemon[2540078]: <root@pam> starting task UPID:proxmox-intel:0026EB13:074BFD57:63D61F5B:qmmove:118:root@pam:
Jan 28 23:25:15 proxmox-intel pvedaemon[2550547]: <root@pam> move disk VM 118: move --disk scsi0 --storage syno-proxmox
Jan 28 23:25:16 proxmox-intel kernel: [1224082.279201] CIFS: VFS: \\10.0.0.5 Error -512 sending data on socket to server
Jan 28 23:25:17 proxmox-intel pvedaemon[2550547]: VM 118 qmp command failed - VM 118 qmp command 'block-job-cancel' failed - Block job 'drive-scsi0' not found
Jan 28 23:25:17 proxmox-intel pvedaemon[2550547]: storage migration failed: block job (mirror) error: drive-scsi0: 'mirror' has been cancelled

pveversion -v:

Code:
proxmox-ve: 7.3-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.3-3 (running version: 7.3-3/c3928077)
pve-kernel-helper: 7.3-1
pve-kernel-5.15: 7.2-14
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph: 17.2.5-pve1
ceph-fuse: 17.2.5-pve1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown2: 3.1.0-1+pmx3
ksm-control-daemon: 1.4-1
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.3
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-1
libpve-guest-common-perl: 4.2-3
libpve-http-server-perl: 4.1-5
libpve-storage-perl: 7.3-1
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.3.1-1
proxmox-backup-file-restore: 2.3.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.0-1
proxmox-widget-toolkit: 3.5.3
pve-cluster: 7.3-1
pve-container: 4.4-2
pve-docs: 7.3-1
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-7
pve-firmware: 3.6-1
pve-ha-manager: 3.5.1
pve-i18n: 2.8-1
pve-qemu-kvm: 7.1.0-4
pve-xtermjs: 4.16.0-1
qemu-server: 7.3-1
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+2
vncterm: 1.7-1
zfsutils-linux: 2.1.6-pve1

cat /proc/fs/cifs/DebugData:
Code:
CIFS Version 2.33
Features: DFS,FSCACHE,SMB_DIRECT,STATS,DEBUG,ALLOW_INSECURE_LEGACY,CIFS_POSIX,UPCALL(SPNEGO),XATTR,ACL,WITNESS
CIFSMaxBufSize: 16384
Active VFS Requests: 0

Servers:
1) ConnectionId: 0x2 Hostname: 10.0.0.5
Number of credits: 8190 Dialect 0x311
TCP status: 1 Instance: 1
Local Users To Server: 1 SecMode: 0x1 Req On Wire: 0
In Send: 0 In MaxReq Wait: 0

        Sessions:
        1) Address: 10.0.0.5 Uses: 1 Capability: 0x300045       Session Status: 1
        Security type: RawNTLMSSP  SessionId: 0xe4c77220
        User: 0 Cred User: 0

        Shares:
        0) IPC: \\10.0.0.5\IPC$ Mounts: 1 DevInfo: 0x0 Attributes: 0x0
        PathComponentMax: 0 Status: 1 type: 0 Serial Number: 0x0
        Share Capabilities: None        Share Flags: 0x0
        tid: 0xaa164ca3 Maximal Access: 0x1f00a9

        1) \\10.0.0.5\proxmox Mounts: 1 DevInfo: 0x20 Attributes: 0x805007f
        PathComponentMax: 255 Status: 1 type: DISK Serial Number: 0xe67bb25d
        Share Capabilities: None Aligned, Partition Aligned,    Share Flags: 0x0
        tid: 0x64d2fab8 Optimal sector size: 0x200      Maximal Access: 0x1f01ff

cat /etc/pve/storage.cfg:
Code:
cifs: syno-proxmox
    path /mnt/pve/syno-proxmox
    server 10.0.0.5
    share proxmox
    content images,backup,vztmpl,rootdir,snippets,iso
    prune-backups keep-all=1
    username proxmox
 
@fiona thank you for your quick response. I am starting to wonder whether what I am observing is this same issue or a related but separate one. My problems arise when I attempt to move a disk of a live VM from Ceph to a CIFS mount. The action fails - sometimes immediately, sometimes after copying some data. Syslog reports the same Error -512, which is why I ended up in this thread:

Sorry for the delay. I think the issue is that with an online move disk, our check whether io_uring should be disabled isn't performed :/ The check happens at startup and sets the appropriate option on the -drive command line for QEMU. We'd need to tell QEMU to turn it off on the target of the move operation, but I'm not sure whether there is currently a way to do that; I will need to have a closer look.

EDIT: Created a bug report to keep track of the issue https://bugzilla.proxmox.com/show_bug.cgi?id=4525
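Until that is resolved, one workaround that should sidestep the online mirror path is to do the move with the VM powered off (using the IDs from your post); a rough sketch:
Code:
# move the disk while the VM is not running, then start it again so the
# startup check can pick a safe AIO mode for the CIFS storage
qm shutdown 118
qm move_disk 118 scsi0 syno-proxmox
qm start 118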
 
Hey guys,

I actually have the same problem, but with a container rather than a VM. For some time I have also had the problem that "bad crc/signature" messages keep appearing in the Proxmox syslog. This seems to be because the container mounts various file server shares and then loses them.

Container: Ubuntu 20.04.6 on proxmox-ve: 7.3-1 (running kernel: 6.1.2-1-pve)
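In case it helps with narrowing this down, more verbose CIFS logging can be enabled on the node while reproducing the problem; a minimal sketch (assuming the kernel was built with the usual CIFS debugging options):
Code:
# 1) enable verbose cifs.ko logging
echo 7 > /proc/fs/cifs/cifsFYI
# 2) reproduce the problem and watch the kernel log (Ctrl+C to stop following)
dmesg -w
# 3) disable verbose logging again
echo 0 > /proc/fs/cifs/cifsFYI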

root@TestContainer:~# dmesg
[1016351.343569] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
[1016351.344417] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.344420] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.344421] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.344422] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.344427] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
[1016351.345072] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.345074] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.345075] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.345075] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.345079] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
[1016351.345519] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.345520] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.345521] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.345522] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.345524] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
[1016351.345826] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.345827] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.345828] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.345829] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.345830] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
[1016351.346117] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.346118] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.346119] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.346120] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.346122] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
[1016351.346393] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.346395] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.346395] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.346396] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.346398] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
[1016351.346679] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.346683] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.346684] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.346686] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.346689] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
[1016351.346964] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.346967] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.346969] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.346971] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.346975] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
[1016351.347318] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.347320] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.347321] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.347322] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.347326] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
[1016351.347677] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.347679] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.347681] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.347683] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.347686] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.347688] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.347690] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.347691] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.347694] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.347695] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.347698] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.347699] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.347702] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.347703] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.347704] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.347705] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.347708] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.347709] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.347710] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.347712] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.347714] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.347715] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.347717] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.347718] 00000030: 00000000 00000000 00000000 00000000 ................
[1016351.347720] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1016351.347721] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1016351.347723] 00000020: 00000000 00000000 00000000 00000000 ................
[1016351.347724] 00000030: 00000000 00000000 00000000 00000000 ................
[1016566.232409] libceph: osd2 (1)10.26.15.50:6888 socket closed (con state OPEN)
[1016751.347292] CIFS: __readahead_batch() returned 3/1024
[1016813.917247] CIFS: __readahead_batch() returned 36/1024
[1016915.274721] CIFS: __readahead_batch() returned 624/1024
[1016919.365776] CIFS: __readahead_batch() returned 320/1024
[1016923.003015] CIFS: __readahead_batch() returned 595/1024
[1016926.848117] CIFS: __readahead_batch() returned 467/1024
[1016930.514902] CIFS: __readahead_batch() returned 798/1024
[1016941.732703] CIFS: __readahead_batch() returned 676/1024
[1017072.537382] CIFS: __readahead_batch() returned 676/1024
[1017119.732143] libceph: osd2 (1)10.26.15.50:6888 socket closed (con state OPEN)
[1017140.806811] CIFS: __readahead_batch() returned 190/1024
[1017147.750476] CIFS: __readahead_batch() returned 160/1024
[1017151.511786] CIFS: __readahead_batch() returned 32/1024
[1017155.065823] CIFS: __readahead_batch() returned 843/1024
[1017162.767514] CIFS: __readahead_batch() returned 676/1024
[1017233.937046] libceph: read_partial_message 00000000f4c76d40 data crc 3384998954 != exp. 3952744336
[1017233.937056] libceph: read_partial_message 00000000c9d36e25 data crc 1459313744 != exp. 2082328904
[1017233.937069] libceph: osd10 (1)10.26.15.52:6804 bad crc/signature
[1017233.937486] libceph: osd9 (1)10.26.15.52:6835 bad crc/signature
[1017234.008573] libceph: read_partial_message 00000000b99ce386 data crc 2146006423 != exp. 94080286
[1017234.008592] libceph: read_partial_message 00000000dc8d021a data crc 1675959117 != exp. 3565934067
[1017234.008942] libceph: osd10 (1)10.26.15.52:6804 bad crc/signature
[1017234.009280] libceph: osd1 (1)10.26.15.50:6810 bad crc/signature
[1017235.084560] libceph: read_partial_message 000000008a4d8d56 data crc 3794373310 != exp. 3796242444
[1017235.084800] libceph: osd1 (1)10.26.15.50:6810 bad crc/signature
[1017235.199896] libceph: read_partial_message 0000000074a0535d data crc 3061441865 != exp. 1489401521
[1017235.199941] libceph: read_partial_message 0000000003a40739 data crc 1365164881 != exp. 238705880
[1017235.200165] libceph: osd3 (1)10.26.15.50:6858 bad crc/signature
[1017235.201002] libceph: osd10 (1)10.26.15.52:6804 bad crc/signature
[1017238.745290] libceph: read_partial_message 0000000070aa279f data crc 1034986088 != exp. 2701370662
[1017238.745559] libceph: osd1 (1)10.26.15.50:6810 bad crc/signature
[1017238.781498] libceph: read_partial_message 00000000ff8fb7dd data crc 2762722003 != exp. 2418349765
[1017238.781505] libceph: read_partial_message 000000002869c2ac data crc 4099414400 != exp. 2039625718
[1017238.781952] libceph: osd2 (1)10.26.15.50:6888 bad crc/signature
[1017238.782261] libceph: osd7 (1)10.26.15.51:6809 bad crc/signature
[1017238.898502] libceph: read_partial_message 00000000ff8fb7dd data crc 2559528101 != exp. 228822481
[1017238.898916] libceph: osd3 (1)10.26.15.50:6858 bad crc/signature
[1017243.300474] libceph: read_partial_message 00000000641bf45d data crc 2796105785 != exp. 472079829
[1017243.300504] libceph: read_partial_message 00000000442bd182 data crc 1721631902 != exp. 4230074893
[1017243.301108] libceph: osd6 (1)10.26.15.51:6839 bad crc/signature
[1017243.301946] libceph: osd0 (1)10.26.15.50:6834 bad crc/signature
[1017244.157516] libceph: read_partial_message 00000000cd2914bc data crc 2139303003 != exp. 971856957
[1017244.157522] libceph: read_partial_message 00000000cb3187d9 data crc 3966178819 != exp. 901871706
[1017244.157827] libceph: osd6 (1)10.26.15.51:6839 bad crc/signature
[1017244.158902] libceph: osd9 (1)10.26.15.52:6835 bad crc/signature
[1017244.447207] libceph: read_partial_message 00000000f3cf65bb data crc 1408972093 != exp. 924660784
[1017244.447219] libceph: read_partial_message 00000000e3f14270 data crc 3335203508 != exp. 1778117778
[1017244.447999] libceph: osd6 (1)10.26.15.51:6839 bad crc/signature
[1017244.449264] libceph: osd7 (1)10.26.15.51:6809 bad crc/signature
[1017286.247718] libceph: osd2 (1)10.26.15.50:6888 socket closed (con state OPEN)
[1017353.848331] CIFS: __readahead_batch() returned 417/1024
[1017400.739167] CIFS: __readahead_batch() returned 807/1024
[1017433.859973] libceph: osd2 (1)10.26.15.50:6888 socket closed (con state OPEN)
[1017435.001857] CIFS: __readahead_batch() returned 673/1024
[1017444.070251] CIFS: __readahead_batch() returned 485/1024
[1017449.026749] CIFS: __readahead_batch() returned 671/1024
[1017480.765560] CIFS: __readahead_batch() returned 27/1024
[1017515.909154] CIFS: __readahead_batch() returned 836/1024
[1017547.569246] CIFS: __readahead_batch() returned 209/1024
[1017550.750279] CIFS: __readahead_batch() returned 268/1024
[1017624.120455] libceph: read_partial_message 0000000067045f34 data crc 1121178867 != exp. 3543028163
[1017624.120777] libceph: osd4 (1)10.26.15.51:6823 bad crc/signature
[1017766.110312] libceph: osd2 (1)10.26.15.50:6888 socket closed (con state OPEN)
[1017922.426403] TCP: request_sock_TCP: Possible SYN flooding on port 8000. Sending cookies. Check SNMP counters.
[1018266.038066] libceph: osd0 (1)10.26.15.50:6834 socket closed (con state OPEN)
[1018306.225506] libceph: osd0 (1)10.26.15.50:6834 socket closed (con state OPEN)
[1018738.060522] cifs_demultiplex_thread: 6 callbacks suppressed
[1018738.060526] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 0
[1018738.061679] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1018738.061681] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1018738.061682] 00000020: 00000000 00000000 00000000 00000000 ................
[1018738.061683] 00000030: 00000000 00000000 00000000 00000000 ................
[1018738.061688] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 0
[1018738.061956] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
[1018738.061957] 00000010: 00000001 00000000 ffffffff ffffffff ................
[1018738.061958] 00000020: 00000000 00000000 00000000 00000000 ................
[1018738.061959] 00000030: 00000000 00000000 00000000 00000000 ................

Mar 16 15:27:34 TestContainer kernel: [1015666.246197] libceph: osd2 (1)10.26.15.50:6888 socket closed (con state OPEN)
Mar 16 15:27:59 TestContainer kernel: [1015691.117903] libceph: read_partial_message 000000004efe0383 data crc 1593899735 != exp. 106228753
Mar 16 15:27:59 TestContainer kernel: [1015691.118711] libceph: osd10 (1)10.26.15.52:6804 bad crc/signature
Mar 16 15:38:59 TestContainer kernel: [1016351.343569] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 2
Mar 16 15:38:59 TestContainer kernel: [1016351.344420] 00000010: 00000001 00000000 ffffffff ffffffff ................
Mar 16 15:38:59 TestContainer kernel: [1016351.344422] 00000030: 00000000 00000000 00000000 00000000 ................
Mar 16 15:38:59 TestContainer kernel: [1016351.345075] 00000020: 00000000 00000000 00000000 00000000 ................
Mar 16 15:38:59 TestContainer kernel: [1016351.345519] 00000000: 424d53fe 00000040 00000000 00000012 .SMB@...........
Mar 16 15:38:59 TestContainer kernel: [1016351.345521] 00000020: 00000000 00000000 00000000 00000000 ................
Mar 16 15:38:59 TestContainer kernel: [1016351.346683] 00000010: 00000001 00000000 ffffffff ffffffff ................
Mar 16 15:38:59 TestContainer kernel: [1016351.346686] 00000030: 00000000 00000000 00000000 00000000 ................
Mar 16 15:42:34 TestContainer kernel: [1016566.232409] libceph: osd2 (1)10.26.15.50:6888 socket closed (con state OPEN)
Mar 16 15:45:39 TestContainer kernel: [1016751.347292] CIFS: __readahead_batch() returned 3/1024
Mar 16 15:48:34 TestContainer kernel: [1016926.848117] CIFS: __readahead_batch() returned 467/1024
Mar 16 15:48:38 TestContainer kernel: [1016930.514902] CIFS: __readahead_batch() returned 798/1024
Mar 16 15:53:41 TestContainer kernel: [1017233.937056] libceph: read_partial_message 00000000c9d36e25 data crc 1459313744 != exp. 2082328904
Mar 16 15:53:41 TestContainer kernel: [1017233.937486] libceph: osd9 (1)10.26.15.52:6835 bad crc/signature
Mar 16 15:53:43 TestContainer kernel: [1017235.084800] libceph: osd1 (1)10.26.15.50:6810 bad crc/signature
Mar 16 15:53:52 TestContainer kernel: [1017244.157522] libceph: read_partial_message 00000000cb3187d9 data crc 3966178819 != exp. 901871706
Mar 16 15:55:41 TestContainer kernel: [1017353.848331] CIFS: __readahead_batch() returned 417/1024
Mar 16 15:57:48 TestContainer kernel: [1017480.765560] CIFS: __readahead_batch() returned 27/1024
Mar 16 15:58:23 TestContainer kernel: [1017515.909154] CIFS: __readahead_batch() returned 836/1024
Mar 16 15:58:58 TestContainer kernel: [1017550.750279] CIFS: __readahead_batch() returned 268/1024
Mar 16 16:18:46 TestContainer kernel: [1018738.060526] CIFS: VFS: \\fileserver No task to wake, unknown frame received! NumMids 0
Mar 16 16:18:46 TestContainer kernel: [1018738.061681] 00000010: 00000001 00000000 ffffffff ffffffff ................
Mar 16 16:23:01 TestContainer CRON[69720]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
 
