kernel: CIFS: VFS: No writable handle in writepages rc=-9

Allister (New Member) · Jul 12, 2023
I'm getting these errors in the log on my 3rd node, which has a CIFS/SMB connection to a Synology NAS. Can anyone give me any insight into this error? The error is: kernel: CIFS: VFS: No writable handle in writepages rc=-9
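For anyone decoding these messages: the rc value is a negated kernel errno, so it can be translated with a quick check like the one below (a sketch; rc=-512, which shows up later in this thread, is the kernel-internal ERESTARTSYS and has no userspace errno name):

```python
import errno
import os

# The rc in "CIFS: VFS: ... rc=-9" is a negated kernel errno.
for rc in (-9, -11):
    code = -rc
    print(rc, errno.errorcode[code], os.strerror(code))
# -9  -> EBADF  "Bad file descriptor" (no usable open handle for writeback)
# -11 -> EAGAIN "Resource temporarily unavailable" (socket send would block)
```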

It causes the LXC to go down and the whole node to show as a question mark in the GUI. I can reach the node without any trouble, but its status shows as unknown. Here is a portion of the log:
Code:
07:46:07 pmx-node3 systemd[1]: user@0.service: Deactivated successfully.
Dec 11 07:46:07 pmx-node3 systemd[1]: Stopped user@0.service - User Manager for UID 0.
Dec 11 07:46:07 pmx-node3 systemd[1]: Stopping user-runtime-dir@0.service - User Runtime Directory /run/user/0...
Dec 11 07:46:07 pmx-node3 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 11 07:46:07 pmx-node3 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 11 07:46:07 pmx-node3 systemd[1]: Stopped user-runtime-dir@0.service - User Runtime Directory /run/user/0.
Dec 11 07:46:07 pmx-node3 systemd[1]: Removed slice user-0.slice - User Slice of UID 0.
Dec 11 07:46:07 pmx-node3 systemd[1]: user-0.slice: Consumed 34.424s CPU time.
Dec 11 07:46:09 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:09 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:14 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:14 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:19 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:19 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:24 pmx-node3 kernel: INFO: task unrar:157049 blocked for more than 966 seconds.
Dec 11 07:46:24 pmx-node3 kernel:       Tainted: P           O       6.5.11-7-pve #1
Dec 11 07:46:24 pmx-node3 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 11 07:46:24 pmx-node3 kernel: task:unrar           state:D stack:0     pid:157049 ppid:2023   flags:0x00004006
Dec 11 07:46:24 pmx-node3 kernel: Call Trace:
Dec 11 07:46:24 pmx-node3 kernel:  <TASK>
Dec 11 07:46:24 pmx-node3 kernel:  __schedule+0x3fd/0x1450
Dec 11 07:46:24 pmx-node3 kernel:  ? free_unref_page_commit+0xf1/0x190
Dec 11 07:46:24 pmx-node3 kernel:  schedule+0x63/0x110
Dec 11 07:46:24 pmx-node3 kernel:  io_schedule+0x46/0x80
Dec 11 07:46:24 pmx-node3 kernel:  folio_wait_bit_common+0x136/0x330
Dec 11 07:46:24 pmx-node3 kernel:  ? __pfx_wake_page_function+0x10/0x10
Dec 11 07:46:24 pmx-node3 kernel:  __folio_lock+0x17/0x30
Dec 11 07:46:24 pmx-node3 kernel:  invalidate_inode_pages2_range+0x178/0x460
Dec 11 07:46:24 pmx-node3 kernel:  invalidate_inode_pages2+0x17/0x30
Dec 11 07:46:24 pmx-node3 kernel:  cifs_invalidate_mapping+0x3b/0x80 [cifs]
Dec 11 07:46:24 pmx-node3 kernel:  cifs_revalidate_mapping+0xc2/0xe0 [cifs]
Dec 11 07:46:24 pmx-node3 kernel:  cifs_revalidate_dentry+0x1f/0x30 [cifs]
Dec 11 07:46:24 pmx-node3 kernel:  cifs_d_revalidate+0x5f/0x180 [cifs]
Dec 11 07:46:24 pmx-node3 kernel:  lookup_fast+0x83/0x100
Dec 11 07:46:24 pmx-node3 kernel:  walk_component+0x2c/0x190
Dec 11 07:46:24 pmx-node3 kernel:  path_lookupat+0x67/0x1a0
Dec 11 07:46:24 pmx-node3 kernel:  filename_lookup+0xe4/0x200
Dec 11 07:46:24 pmx-node3 kernel:  user_path_at_empty+0x3e/0x70
Dec 11 07:46:24 pmx-node3 kernel:  do_utimes+0xec/0x160
Dec 11 07:46:24 pmx-node3 kernel:  __x64_sys_utimensat+0x9d/0xf0
Dec 11 07:46:24 pmx-node3 kernel:  do_syscall_64+0x5b/0x90
Dec 11 07:46:24 pmx-node3 kernel:  ? irqentry_exit_to_user_mode+0x17/0x20
Dec 11 07:46:24 pmx-node3 kernel:  ? irqentry_exit+0x43/0x50
Dec 11 07:46:24 pmx-node3 kernel:  ? sysvec_apic_timer_interrupt+0x4b/0xd0
Dec 11 07:46:24 pmx-node3 kernel:  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
Dec 11 07:46:24 pmx-node3 kernel: RIP: 0033:0x7fbb0b01b71f
Dec 11 07:46:24 pmx-node3 kernel: RSP: 002b:00007ffeda930448 EFLAGS: 00000202 ORIG_RAX: 0000000000000118
Dec 11 07:46:24 pmx-node3 kernel: RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fbb0b01b71f
Dec 11 07:46:24 pmx-node3 kernel: RDX: 00007ffeda930450 RSI: 000056518e703dd0 RDI: 00000000ffffff9c
Dec 11 07:46:24 pmx-node3 kernel: RBP: 00007ffeda930480 R08: 0000000000000007 R09: 000056518e808f20
Dec 11 07:46:24 pmx-node3 kernel: R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffeda931f30
Dec 11 07:46:24 pmx-node3 kernel: R13: 00007ffeda930470 R14: 00007ffeda931f30 R15: 00000000005c68d1
Dec 11 07:46:24 pmx-node3 kernel:  </TASK>
Dec 11 07:46:24 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:24 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:29 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:29 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:34 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:34 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:39 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:39 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:44 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:44 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:50 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:50 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:55 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:55 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:00 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:00 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:05 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:05 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:10 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:10 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:15 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:15 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:20 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:20 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:25 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:25 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:30 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:30 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:36 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:47:36 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9

...and it just keeps spamming the logs with this error. Any help in troubleshooting this would be greatly appreciated!
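To get a sense of how fast the errors are accumulating, the kernel log can be counted with a grep; the sample string below is a stand-in for real `journalctl -k` output on an affected node:

```shell
# Stand-in for `journalctl -k` output; on a real node you would pipe
# journalctl -k into the same grep instead of using a sample string.
sample='Dec 11 07:46:09 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:14 pmx-node3 kernel: CIFS: VFS: No writable handle in writepages rc=-9
Dec 11 07:46:14 pmx-node3 systemd[1]: run-user-0.mount: Deactivated successfully.'
count=$(printf '%s\n' "$sample" | grep -c 'No writable handle in writepages')
echo "$count"   # prints 2
```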
 
UPDATE: I think I've resolved this problem. My hosts were connecting to the same share multiple times, using different credentials for the different LXCs that use them. I simplified it all down to one account for all of them, and (so far) the error has not come back. I'm not sure whether Proxmox has an issue with that kind of setup, but maybe I uncovered a bug by configuring it that way.
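For what it's worth on the multiple-credentials angle: by default mount.cifs reuses an existing TCP session to the same server across mounts, and the nosharesock mount option forces a dedicated connection per mount instead. A sketch as an /etc/fstab line, with the share path, mount point, and credentials file all placeholders:

```
//nas.example/share  /mnt/lxc1-share  cifs  credentials=/root/.smbcred-lxc1,vers=3.1.1,nosharesock  0  0
```

Not a confirmed fix, just a way to rule out session sharing between the differently-credentialed mounts.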
 
Well, I thought I solved the problem but I did not. I'm still getting these errors. If anyone knows what is going on with this, I would greatly appreciate any help.
 
I'm also getting similar errors that bring the node to its knees. I would love to figure this out.

This post is very likely related. https://forum.proxmox.com/threads/system-freezes-during-backup.144989/

Code:
2024-04-13T04:05:44.459581-05:00 phpve01 kernel: [117425.065560] eth0: renamed from veth5a5d5e5
2024-04-13T04:05:44.491579-05:00 phpve01 kernel: [117425.097214] br-0e02950d286a: port 35(vethc8dba05) entered blocking state
2024-04-13T04:05:44.491603-05:00 phpve01 kernel: [117425.097219] br-0e02950d286a: port 35(vethc8dba05) entered forwarding state
2024-04-13T04:05:44.559593-05:00 phpve01 kernel: [117425.165863] eth1: renamed from veth2704176
2024-04-13T04:05:44.611570-05:00 phpve01 kernel: [117425.217860] br-eac7ccf10a5a: port 37(vethb3374b9) entered blocking state
2024-04-13T04:05:44.611590-05:00 phpve01 kernel: [117425.217870] br-eac7ccf10a5a: port 37(vethb3374b9) entered forwarding state
2024-04-13T04:05:45.571569-05:00 phpve01 kernel: [117426.179371] br-eac7ccf10a5a: port 38(veth1d49bc7) entered blocking state
2024-04-13T04:05:45.571588-05:00 phpve01 kernel: [117426.179376] br-eac7ccf10a5a: port 38(veth1d49bc7) entered disabled state
2024-04-13T04:05:45.571588-05:00 phpve01 kernel: [117426.179385] veth1d49bc7: entered allmulticast mode
2024-04-13T04:05:45.571590-05:00 phpve01 kernel: [117426.179423] veth1d49bc7: entered promiscuous mode
2024-04-13T04:05:46.351599-05:00 phpve01 kernel: [117426.957273] eth0: renamed from veth07ec874
2024-04-13T04:05:46.395605-05:00 phpve01 kernel: [117427.001389] br-eac7ccf10a5a: port 38(veth1d49bc7) entered blocking state
2024-04-13T04:05:46.395630-05:00 phpve01 kernel: [117427.001394] br-eac7ccf10a5a: port 38(veth1d49bc7) entered forwarding state
2024-04-13T07:47:05.914196-05:00 phpve01 kernel: [130706.522342] CIFS: VFS: \\vmnas01.home sends on sock 00000000c4e784d9 stuck for 15 seconds
2024-04-13T07:47:05.936984-05:00 phpve01 kernel: [130706.522391] CIFS: VFS: \\vmnas01.home Error -11 sending data on socket to server
2024-04-13T08:05:52.111179-05:00 phpve01 kernel: [131832.746214] CIFS: VFS: Send error in read = -512
2024-04-13T08:05:54.075951-05:00 phpve01 kernel: [131834.716116] CIFS: VFS: Send error in read = -512
2024-04-13T08:05:55.943703-05:00 phpve01 kernel: [131836.583662] CIFS: VFS: Send error in read = -512
2024-04-13T08:05:59.783586-05:00 phpve01 kernel: [131840.423083] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:08.519569-05:00 phpve01 kernel: [131849.159395] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:11.523603-05:00 phpve01 kernel: [131852.164478] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:14.631583-05:00 phpve01 kernel: [131855.271500] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:17.631592-05:00 phpve01 kernel: [131858.270929] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:18.931595-05:00 phpve01 kernel: [131859.569945] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:20.687579-05:00 phpve01 kernel: [131861.325879] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:22.803628-05:00 phpve01 kernel: [131863.442075] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:24.095575-05:00 phpve01 kernel: [131864.735125] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:26.055606-05:00 phpve01 kernel: [131866.694775] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:29.387659-05:00 phpve01 kernel: [131870.025562] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:31.047651-05:00 phpve01 kernel: [131871.685758] CIFS: VFS: Send error in read = -512
2024-04-13T08:06:59.755604-05:00 phpve01 kernel: [131900.394480] CIFS: VFS: Send error in read = -512
2024-04-13T08:07:01.239630-05:00 phpve01 kernel: [131901.878015] CIFS: VFS: Send error in read = -512
2024-04-13T08:07:14.383593-05:00 phpve01 kernel: [131915.022068] CIFS: VFS: Send error in read = -512
2024-04-13T08:07:20.723616-05:00 phpve01 kernel: [131921.360765] CIFS: VFS: Send error in read = -512
2024-04-13T08:07:25.071590-05:00 phpve01 kernel: [131925.709235] CIFS: VFS: Send error in read = -512
2024-04-13T08:07:27.819584-05:00 phpve01 kernel: [131928.457608] CIFS: VFS: Send error in read = -512
2024-04-13T08:07:31.307587-05:00 phpve01 kernel: [131931.945572] CIFS: VFS: Send error in read = -512
2024-04-13T08:07:32.643609-05:00 phpve01 kernel: [131933.283303] CIFS: VFS: Send error in read = -512
2024-04-13T08:07:35.411592-05:00 phpve01 kernel: [131936.049446] CIFS: VFS: Send error in read = -512
2024-04-13T08:14:14.563590-05:00 phpve01 kernel: [132335.201389] CIFS: VFS: \\vmnas01.home sends on sock 00000000cfaf78d0 stuck for 15 seconds
2024-04-13T08:14:14.563616-05:00 phpve01 kernel: [132335.201435] CIFS: VFS: \\vmnas01.home Error -11 sending data on socket to server
2024-04-13T08:14:33.503613-05:00 phpve01 kernel: [132354.141464] CIFS: VFS: No writable handle in writepages rc=-9
2024-04-13T08:15:04.223622-05:00 phpve01 kernel: [132384.861541] CIFS: VFS: No writable handle in writepages rc=-9
2024-04-13T08:15:34.943652-05:00 phpve01 kernel: [132415.581539] CIFS: VFS: No writable handle in writepages rc=-9
2024-04-13T08:16:05.667575-05:00 phpve01 kernel: [132446.305570] CIFS: VFS: No writable handle in writepages rc=-9
2024-04-13T08:16:36.383025-05:00 phpve01 kernel: [132477.021718] CIFS: VFS: No writable handle in writepages rc=-9
2024-04-13T08:17:07.103618-05:00 phpve01 kernel: [132507.741696] CIFS: VFS: No writable handle in writepages rc=-9
2024-04-13T08:19:54.784799-05:00 phpve01 kernel: [132675.421357] INFO: task kswapd0:111 blocked for more than 120 seconds.
2024-04-13T08:19:54.784860-05:00 phpve01 kernel: [132675.421399]       Tainted: P           OE      6.5.13-5-pve #1
2024-04-13T08:19:54.784861-05:00 phpve01 kernel: [132675.421413] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2024-04-13T08:19:54.784862-05:00 phpve01 kernel: [132675.421429] task:kswapd0         state:D stack:0     pid:111   ppid:2      flags:0x00004000
2024-04-13T08:19:54.784866-05:00 phpve01 kernel: [132675.421435] Call Trace:
2024-04-13T08:19:54.784869-05:00 phpve01 kernel: [132675.421438]  <TASK>
2024-04-13T08:19:54.785181-05:00 phpve01 kernel: [132675.421445]  __schedule+0x3fc/0x1440
2024-04-13T08:19:54.785189-05:00 phpve01 kernel: [132675.421458]  ? free_pcppages_bulk+0x20a/0x2c0
2024-04-13T08:19:54.785190-05:00 phpve01 kernel: [132675.421465]  schedule+0x63/0x110
2024-04-13T08:19:54.785190-05:00 phpve01 kernel: [132675.421467]  io_schedule+0x46/0x80
2024-04-13T08:19:54.785190-05:00 phpve01 kernel: [132675.421469]  folio_wait_bit_common+0x136/0x330
2024-04-13T08:19:54.785191-05:00 phpve01 kernel: [132675.421479]  ? __pfx_wake_page_function+0x10/0x10
2024-04-13T08:19:54.785192-05:00 phpve01 kernel: [132675.421481]  __filemap_get_folio+0x1d6/0x230
2024-04-13T08:19:54.785192-05:00 phpve01 kernel: [132675.421488]  truncate_inode_pages_range+0x1bd/0x4c0
2024-04-13T08:19:54.785193-05:00 phpve01 kernel: [132675.421502]  truncate_inode_pages_final+0x40/0x50
2024-04-13T08:19:54.785194-05:00 phpve01 kernel: [132675.421506]  cifs_evict_inode+0x19/0x50 [cifs]
2024-04-13T08:19:54.785196-05:00 phpve01 kernel: [132675.421761]  evict+0xc5/0x1d0
2024-04-13T08:19:54.785196-05:00 phpve01 kernel: [132675.421769]  iput+0x14b/0x260
2024-04-13T08:19:54.785405-05:00 phpve01 kernel: [132675.421772]  dentry_unlink_inode+0xd4/0x150
2024-04-13T08:19:54.785428-05:00 phpve01 kernel: [132675.421774]  __dentry_kill+0xde/0x190
2024-04-13T08:19:54.785430-05:00 phpve01 kernel: [132675.421776]  shrink_dentry_list+0x76/0x150
2024-04-13T08:19:54.785431-05:00 phpve01 kernel: [132675.421778]  prune_dcache_sb+0x59/0x90
2024-04-13T08:19:54.785431-05:00 phpve01 kernel: [132675.421783]  super_cache_scan+0x139/0x210
2024-04-13T08:19:54.785576-05:00 phpve01 kernel: [132675.421809]  do_shrink_slab+0x14e/0x310
2024-04-13T08:19:54.785579-05:00 phpve01 kernel: [132675.421816]  shrink_slab+0x1e8/0x290
2024-04-13T08:19:54.785581-05:00 phpve01 kernel: [132675.421821]  shrink_one+0x13c/0x1e0
2024-04-13T08:19:54.785581-05:00 phpve01 kernel: [132675.421823]  shrink_node+0x9bd/0xc10
2024-04-13T08:19:54.785619-05:00 phpve01 kernel: [132675.421826]  balance_pgdat+0x51e/0x9c0
2024-04-13T08:19:54.785621-05:00 phpve01 kernel: [132675.421828]  ? raw_spin_rq_unlock+0x10/0x40
2024-04-13T08:19:54.785622-05:00 phpve01 kernel: [132675.421831]  ? finish_task_switch.isra.0+0x85/0x2c0
2024-04-13T08:19:54.785622-05:00 phpve01 kernel: [132675.421835]  kswapd+0x1f6/0x3b0
2024-04-13T08:19:54.785624-05:00 phpve01 kernel: [132675.421836]  ? __pfx_autoremove_wake_function+0x10/0x10
2024-04-13T08:19:54.785626-05:00 phpve01 kernel: [132675.421840]  ? __pfx_kswapd+0x10/0x10
2024-04-13T08:19:54.785627-05:00 phpve01 kernel: [132675.421842]  kthread+0xef/0x120
2024-04-13T08:19:54.785628-05:00 phpve01 kernel: [132675.421843]  ? __pfx_kthread+0x10/0x10
2024-04-13T08:19:54.785815-05:00 phpve01 kernel: [132675.421845]  ret_from_fork+0x44/0x70
2024-04-13T08:19:54.785835-05:00 phpve01 kernel: [132675.421849]  ? __pfx_kthread+0x10/0x10
2024-04-13T08:19:54.785836-05:00 phpve01 kernel: [132675.421851]  ret_from_fork_asm+0x1b/0x30
2024-04-13T08:19:54.785837-05:00 phpve01 kernel: [132675.421853]  </TASK>
2024-04-13T08:19:54.785838-05:00 phpve01 kernel: [132675.421863] INFO: task systemd-journal:420 blocked for more than 120 seconds.
2024-04-13T08:19:54.785838-05:00 phpve01 kernel: [132675.421879]       Tainted: P           OE      6.5.13-5-pve #1
2024-04-13T08:19:54.785838-05:00 phpve01 kernel: [132675.421890] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
2024-04-13T08:19:54.785840-05:00 phpve01 kernel: [132675.421903] task:systemd-journal state:D stack:0     pid:420   ppid:1      flags:0x00000002
2024-04-13T08:19:54.785840-05:00 phpve01 kernel: [132675.421906] Call Trace:
2024-04-13T08:19:54.785841-05:00 phpve01 kernel: [132675.421907]  <TASK>
2024-04-13T08:19:54.785842-05:00 phpve01 kernel: [132675.421909]  __schedule+0x3fc/0x1440
2024-04-13T08:19:54.785843-05:00 phpve01 kernel: [132675.421913]  schedule+0x63/0x110
2024-04-13T08:19:54.785843-05:00 phpve01 kernel: [132675.421914]  schedule_preempt_disabled+0x15/0x30
2024-04-13T08:19:54.785844-05:00 phpve01 kernel: [132675.421915]  __mutex_lock.constprop.0+0x3f8/0x7a0
2024-04-13T08:19:54.785845-05:00 phpve01 kernel: [132675.421918]  __mutex_lock_slowpath+0x13/0x20
2024-04-13T08:19:54.785846-05:00 phpve01 kernel: [132675.421920]  mutex_lock+0x3c/0x50
2024-04-13T08:19:54.785850-05:00 phpve01 kernel: [132675.421922]  proc_cgroup_show+0x4c/0x410
2024-04-13T08:19:54.785851-05:00 phpve01 kernel: [132675.421926]  proc_single_show+0x53/0xe0
2024-04-13T08:19:54.785853-05:00 phpve01 kernel: [132675.421930]  seq_read_iter+0x132/0x4a0
2024-04-13T08:19:54.785854-05:00 phpve01 kernel: [132675.421933]  ? _copy_to_user+0x25/0x50
2024-04-13T08:19:54.785855-05:00 phpve01 kernel: [132675.421940]  seq_read+0xcd/0x110
2024-04-13T08:19:54.785855-05:00 phpve01 kernel: [132675.421942]  vfs_read+0xb1/0x360
2024-04-13T08:19:54.785856-05:00 phpve01 kernel: [132675.421945]  ? __seccomp_filter+0x37b/0x560
2024-04-13T08:19:54.785857-05:00 phpve01 kernel: [132675.421950]  ksys_read+0x73/0x100
2024-04-13T08:19:54.785857-05:00 phpve01 kernel: [132675.421952]  __x64_sys_read+0x19/0x30
2024-04-13T08:19:54.785858-05:00 phpve01 kernel: [132675.421954]  do_syscall_64+0x58/0x90
2024-04-13T08:19:54.786229-05:00 phpve01 kernel: [132675.421957]  ? putname+0x5b/0x80
2024-04-13T08:19:54.786235-05:00 phpve01 kernel: [132675.421961]  ? do_sys_openat2+0x9f/0xe0
2024-04-13T08:19:54.786236-05:00 phpve01 kernel: [132675.421965]  ? exit_to_user_mode_prepare+0x39/0x190
2024-04-13T08:19:54.786237-05:00 phpve01 kernel: [132675.421969]  ? syscall_exit_to_user_mode+0x37/0x60
2024-04-13T08:19:54.786238-05:00 phpve01 kernel: [132675.421974]  ? do_syscall_64+0x67/0x90
2024-04-13T08:19:54.786238-05:00 phpve01 kernel: [132675.421975]  ? sysvec_apic_timer_interrupt+0x4b/0xd0
2024-04-13T08:19:54.786250-05:00 phpve01 kernel: [132675.421977]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
2024-04-13T08:19:54.786252-05:00 phpve01 kernel: [132675.421981] RIP: 0033:0x7092b11171dc
2024-04-13T08:19:54.786252-05:00 phpve01 kernel: [132675.422019] RSP: 002b:00007ffdf78155a0 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
2024-04-13T08:19:54.786254-05:00 phpve01 kernel: [132675.422022] RAX: ffffffffffffffda RBX: 0000585527621fc0 RCX: 00007092b11171dc
2024-04-13T08:19:54.786254-05:00 phpve01 kernel: [132675.422023] RDX: 0000000000000400 RSI: 0000585527626bf0 RDI: 0000000000000022
2024-04-13T08:19:54.786255-05:00 phpve01 kernel: [132675.422025] RBP: 00007092b11ee5e0 R08: 0000000000000000 R09: 0000000000000001
2024-04-13T08:19:54.786433-05:00 phpve01 kernel: [132675.422026] R10: 0000000000001000 R11: 0000000000000246 R12: 00007092b0c94208
2024-04-13T08:19:54.786436-05:00 phpve01 kernel: [132675.422027] R13: 0000000000000d68 R14: 00007092b11ed9e0 R15: 0000000000000d68
2024-04-13T08:19:54.786437-05:00 phpve01 kernel: [132675.422030]  </TASK>
 
Have you had any luck? I tried switching network adapters, but I'm still getting this lockup from time to time.
 
I'm seeing this too, also with a Synology. I tried removing the serverino option that I saw recommended. Currently my mount looks like:

//192.168.0.10/MiscFiles/PVE on /mnt/pve/Synology-miscfiles type cifs (rw,relatime,vers=3.1.1,cache=strict,username=pve,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.0.10,file_mode=0755,dir_mode=0755,soft,nounix,mapposix,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,closetimeo=1)

Curious if anyone else finds a root cause for this.
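One variable worth isolating in a mount like the one above: soft combined with actimeo=1 and closetimeo=1 means a stalled server surfaces errors to the VFS quickly rather than retrying, while hard makes the client retry indefinitely. A sketch of the hard variant as an /etc/fstab line; the credentials file is a placeholder, and this is speculation to test, not a confirmed fix:

```
//192.168.0.10/MiscFiles/PVE  /mnt/pve/Synology-miscfiles  cifs  credentials=/root/.smbcred,vers=3.1.1,hard,echo_interval=60  0  0
```

If the share is PVE-managed storage, the equivalent options would go through the storage configuration rather than fstab.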

Code:
[Mon Apr 29 09:20:06 2024] CIFS: VFS: No writable handle in writepages rc=-9
[Mon Apr 29 10:32:21 2024] INFO: task task UPID:pve2::85752 blocked for more than 120 seconds.
[Mon Apr 29 10:32:21 2024]       Tainted: P           O       6.5.13-3-pve #1
[Mon Apr 29 10:32:21 2024] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[Mon Apr 29 10:32:21 2024] task:task UPID:pve2: state:D stack:0     pid:85752 ppid:1131   flags:0x00004006
[Mon Apr 29 10:32:21 2024] Call Trace:
[Mon Apr 29 10:32:21 2024]  <TASK>
[Mon Apr 29 10:32:21 2024]  __schedule+0x3fc/0x1440
[Mon Apr 29 10:32:21 2024]  ? xa_load+0x87/0xf0
[Mon Apr 29 10:32:21 2024]  ? __mod_memcg_lruvec_state+0x58/0xb0
[Mon Apr 29 10:32:21 2024]  schedule+0x63/0x110
[Mon Apr 29 10:32:21 2024]  io_schedule+0x46/0x80
[Mon Apr 29 10:32:21 2024]  folio_wait_bit_common+0x136/0x330
[Mon Apr 29 10:32:21 2024]  ? __pfx_wake_page_function+0x10/0x10
[Mon Apr 29 10:32:21 2024]  __folio_lock+0x17/0x30
[Mon Apr 29 10:32:21 2024]  invalidate_inode_pages2_range+0x178/0x450
[Mon Apr 29 10:32:21 2024]  invalidate_inode_pages2+0x17/0x30
[Mon Apr 29 10:32:21 2024]  cifs_invalidate_mapping+0x3b/0x80 [cifs]
[Mon Apr 29 10:32:21 2024]  cifs_revalidate_mapping+0xc2/0xe0 [cifs]
[Mon Apr 29 10:32:21 2024]  cifs_revalidate_dentry+0x1f/0x30 [cifs]
[Mon Apr 29 10:32:21 2024]  cifs_d_revalidate+0x5f/0x180 [cifs]
[Mon Apr 29 10:32:21 2024]  lookup_fast+0x80/0x100
[Mon Apr 29 10:32:21 2024]  walk_component+0x2c/0x190
[Mon Apr 29 10:32:21 2024]  path_lookupat+0x67/0x1a0
[Mon Apr 29 10:32:21 2024]  filename_lookup+0xe4/0x200
[Mon Apr 29 10:32:21 2024]  vfs_statx+0xa1/0x180
[Mon Apr 29 10:32:21 2024]  vfs_fstatat+0x58/0x80
[Mon Apr 29 10:32:21 2024]  __do_sys_newfstatat+0x44/0x90
[Mon Apr 29 10:32:21 2024]  __x64_sys_newfstatat+0x1c/0x30
[Mon Apr 29 10:32:21 2024]  do_syscall_64+0x58/0x90
[Mon Apr 29 10:32:21 2024]  ? __count_memcg_events+0x65/0xc0
[Mon Apr 29 10:32:21 2024]  ? count_memcg_events.constprop.0+0x2a/0x50
[Mon Apr 29 10:32:21 2024]  ? handle_mm_fault+0xad/0x360
[Mon Apr 29 10:32:21 2024]  ? exit_to_user_mode_prepare+0x39/0x190
[Mon Apr 29 10:32:21 2024]  ? irqentry_exit_to_user_mode+0x17/0x20
[Mon Apr 29 10:32:21 2024]  ? irqentry_exit+0x43/0x50
[Mon Apr 29 10:32:21 2024]  ? exc_page_fault+0x94/0x1b0
[Mon Apr 29 10:32:21 2024]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8
[Mon Apr 29 10:32:21 2024] RIP: 0033:0x7fdacc00075a
[Mon Apr 29 10:32:21 2024] RSP: 002b:00007ffd7c4964c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000106
[Mon Apr 29 10:32:21 2024] RAX: ffffffffffffffda RBX: 00005daaf544e2a0 RCX: 00007fdacc00075a
[Mon Apr 29 10:32:21 2024] RDX: 00005daaf544e4a8 RSI: 00005daafd3c8c50 RDI: 00000000ffffff9c
[Mon Apr 29 10:32:21 2024] RBP: 00005daafd451d70 R08: 0000000000000003 R09: 000000000000010f
[Mon Apr 29 10:32:21 2024] R10: 0000000000000000 R11: 0000000000000246 R12: 00005daafd3c8c50
[Mon Apr 29 10:32:21 2024] R13: 00005daaf449d23b R14: 0000000000000000 R15: 00007fdacbeca738
[Mon Apr 29 10:32:21 2024]  </TASK>
 
Same problem here. I have one LXC on this host that has 2 CIFS connections to my NAS. It will die after only a few hours, with these error messages spamming the logs and the PVE console. I've updated the host and guest to all the latest versions and kernel, but no change unfortunately.

Has anyone found a cause and/or fix for this behavior?
 
Unfortunately, I stopped using containers for now because it was locking up my system. The only thing I found was that Ubuntu kernels had a bug that caused this (or a very similar) issue: https://bugs.launchpad.net/ubuntu/+source/linux/+bug/2060780

I haven't been able to find any reference to the Debian kernel in use causing this issue, so I can't say whether there's an upgrade that would fix it. Maybe someone else will find something more definitive.
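If it helps anyone compare their running kernel against a fixed release, version strings sort cleanly with `sort -V`. Both version strings below are placeholders (the real fixed version from the Launchpad report would go in candidate_fix), so this is only a sketch of the comparison:

```shell
# Placeholders only: candidate_fix is NOT the real fix version from the
# Launchpad bug, just an assumed value to illustrate the comparison.
running="6.5.13-5-pve"
candidate_fix="6.5.99"
base="${running%%-*}"   # strip the "-5-pve" packaging suffix -> 6.5.13
oldest=$(printf '%s\n%s\n' "$base" "$candidate_fix" | sort -V | head -n1)
if [ "$oldest" = "$base" ] && [ "$base" != "$candidate_fix" ]; then
    echo "running kernel predates candidate fix"
fi
```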
 