PVE Node restarted unexpectedly

Hi Everyone,

We have two issues with our cluster:
- Creating a new disk image on newly attached NFS storage does not work
- Disabling the newly attached NFS storage caused a node / server restart

This is our setup:
Cluster of PVE nodes
Diskless PVE nodes booting via iSCSI from NetApp LUNs
VM images stored on NFS shares from NetApp volumes
Nodes are Dell PowerEdge R4xx/R6xx servers with bonded 10GbE NICs to the SAN
PVE Manager version: pve-manager/7.1-10
Kernel version: Linux 5.13.19-3-pve #1 SMP PVE 5.13.19-7

Details of the first issue:
I created a new NFS share on the NetApp and added it to the cluster in the Storage section of the web UI.
I waited a few seconds until all nodes reported the new storage as online.
After that, I tried to add a new raw disk image on this share to a stopped VM hosted on node pve2.
The task started, but nothing happened for a while. After waiting about a minute, I stopped the task manually.
The question is what could be wrong with the image creation; this kind of task usually completes immediately.
I sometimes observe this behavior with newly mounted NFS shares. I usually apply workarounds like deleting and re-adding the NFS storage to the cluster, or restarting the whole process and recreating the volume on the NetApp.
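
As a side note, these are the quick CLI checks I now run before touching a newly added NFS storage (a minimal sketch; the timeout trick just avoids hanging the shell if the mount is stuck):
Code:
pvesm status                      # does PVE consider the storage active?
timeout 5 ls /mnt/pve/mail_srv    # does the mount answer, or does it hang?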

Details of the second issue:
This time I chose another workaround: I disabled the storage in the web UI. This was a bad choice, because the node restarted without any sign of trouble. What could be the problem here?

These are the related logs in syslog:
Code:
Oct  4 11:44:41 pve2 pvedaemon[936905]: <ops@pam> update VM 189: -scsi0 mail_srv:50,format=raw,backup=0
Oct  4 11:44:41 pve2 pvedaemon[936905]: <ops@pam> starting task UPID:pve2:0018F8D6:06B21A27:633C0089:qmconfig:189:ops@pam:
Oct  4 11:45:36 pve2 pvedaemon[1636566]: VM 189 creating disks failed
Oct  4 11:45:36 pve2 pvedaemon[1636566]: unable to create image: received interrupt
Oct  4 11:45:36 pve2 pvedaemon[936905]: <ops@pam> end task UPID:pve2:0018F8D6:06B21A27:633C0089:qmconfig:189:ops@pam: unable to create image: received interrupt

The last few rows before the crash happened:
Code:
Oct  4 11:45:45 pve2 corosync[3492]:   [KNET  ] pmtud: Starting PMTUD for host: 2 link: 0
Oct  4 11:45:45 pve2 corosync[3492]:   [KNET  ] udp: detected kernel MTU: 9000
Oct  4 11:45:45 pve2 corosync[3492]:   [KNET  ] pmtud: PMTUD completed for host: 2 link: 0 current link mtu: 8885
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
Oct  4 11:49:56 pve2 systemd-modules-load[835]: Inserted module 'ib_iser'
Oct  4 11:49:56 pve2 kernel: [    0.000000] Linux version 5.13.19-3-pve (build@proxmox) (gcc (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP PVE 5.13.19-7 (Thu, 20 Jan 2022 16:37:56 +0100) ()
...


I cannot find any more logs about the incident. There is a Dell Lifecycle Log entry:
Code:
SYS1003: System CPU Resetting.
2022-10-04T11:47:31-0500
Log Sequence Number: 1343
Detailed Description:
System is performing a CPU reset because of system power off, power on or a warm reset like CTRL-ALT-DEL.
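
Since this looks like a hard reset, I wonder whether the cluster's HA watchdog fenced the node. These are generic checks for that (assuming HA is configured; a guess, not a confirmed diagnosis):
Code:
ha-manager status                 # is HA active, and were resources on this node?
journalctl -u watchdog-mux -b -1  # watchdog-mux messages from the previous boot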


Any suggestions are welcome and appreciated.

Best Regards,
Attila
 
Are you connecting to the NFS server over the same network as Corosync?
Can you provide the complete output of pveversion -v?
And, if possible, provide the full syslog of pve2 for that day, as well as the syslog of one other node in the cluster.
 
Hi,

Corosync uses the 192.168.80.0/24 subnet in VLAN80.
NFS uses the 192.168.90.0/24 subnet in VLAN90.
VLAN80 and VLAN90 both run over the 10GbE bonding interface.

We have another bonding interface which carries the data VLANs.
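
For context, a minimal sketch of how this bond-plus-VLANs layout looks in /etc/network/interfaces (the interface names, bond mode, and the VLAN80 address are illustrative, not our exact config):
Code:
auto bond1
iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        mtu 9000

# Corosync network (VLAN80)
auto bond1.80
iface bond1.80 inet static
        address 192.168.80.95/24
        mtu 9000

# NFS / storage network (VLAN90)
auto bond1.90
iface bond1.90 inet static
        address 192.168.90.95/24
        mtu 9000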

pveversion -v:
Code:
proxmox-ve: 7.1-1 (running kernel: 5.13.19-3-pve)
pve-manager: 7.1-10 (running version: 7.1-10/6ddebafe)
pve-kernel-helper: 7.1-8
pve-kernel-5.13: 7.1-6
pve-kernel-5.11: 7.0-10
pve-kernel-5.4: 6.4-5
pve-kernel-5.13.19-3-pve: 5.13.19-7
pve-kernel-5.11.22-7-pve: 5.11.22-12
pve-kernel-5.11.22-5-pve: 5.11.22-10
pve-kernel-5.11.22-4-pve: 5.11.22-9
pve-kernel-5.4.128-1-pve: 5.4.128-2
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
libjs-extjs: 7.0.0-1
libknet1: 1.22-pve2
libproxmox-acme-perl: 1.4.1
libproxmox-backup-qemu0: 1.2.0-1
libpve-access-control: 7.1-5
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.1-2
libpve-guest-common-perl: 4.0-3
libpve-http-server-perl: 4.1-1
libpve-storage-perl: 7.0-15
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 4.0.11-1
lxcfs: 4.0.11-pve1
novnc-pve: 1.3.0-1
proxmox-backup-client: 2.1.3-1
proxmox-backup-file-restore: 2.1.3-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.4-5
pve-cluster: 7.1-3
pve-container: 4.1-3
pve-docs: 7.1-2
pve-edk2-firmware: 3.20210831-2
pve-firewall: 4.2-5
pve-firmware: 3.3-4
pve-ha-manager: 3.3-1
pve-i18n: 2.6-2
pve-qemu-kvm: 6.1.0-3
pve-xtermjs: 4.12.0-1
qemu-server: 7.1-4
smartmontools: 7.2-pve2
spiceterm: 3.2-2
swtpm: 0.7.0~rc1+2
vncterm: 1.7-1
zfsutils-linux: 2.1.2-pve1

I found more logs on our syslog server; these entries are not present on the pve2 host itself.
I attached the kern log, the pve2 log, and the pve1 log.

Some "keywords" from kern log:
Code:
Oct  4 11:46:04 pve2 kernel: [1123427.540549] watchdog: BUG: soft lockup - CPU#22 stuck for 26s! [qemu-img:1636569]
Oct  4 11:46:04 pve2 kernel: [1123427.546729] CPU: 22 PID: 1636569 Comm: qemu-img Tainted: P           O      5.13.19-3-pve #1
Oct  4 11:46:04 pve2 kernel: [1123427.559001] Call Trace:
Oct  4 11:46:04 pve2 kernel: [1123427.559817]  nfs_layout_nfsv41_files
Oct  4 11:46:10 pve2 kernel: [1123433.010154] NETDEV WATCHDOG: eno2 (bnx2x): transmit queue 5 timed out
Oct  4 11:46:10 pve2 kernel: [1123433.109184] bnx2x: [bnx2x_panic_dump:919(eno2)]begin crash dump -----------------
Oct  4 11:46:10 pve2 kernel: [1123433.466276] [bnx2x_self_test_log:2939(eno2)]WARNING PGLUE_B: Error in master write. Error details register is not 0. (4:0) VQID. (23:21) - PFID. (24) - VF_VALID. (30:25) - VFID.Value is 0x4409
Oct  4 11:46:10 pve2 kernel: [1123433.696525] bond1: (slave eno2): link status down for interface, disabling it in 200 ms

Interfaces eno1 and eno2 are members of the 10G bond.
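
For reference, the commands I use to check the bond and NIC state (a generic sketch; bond1 and eno2 are our names from the logs above):
Code:
cat /proc/net/bonding/bond1                 # per-slave link status and failure counts
ethtool -S eno2 | grep -i -e err -e drop    # NIC error/drop counters (bnx2x)
ethtool -i eno2                             # driver and firmware versions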

BR,
Attila
 

Attachments

  • pve2.kern.log (98 KB)
  • pve1.sys.log (17.6 KB)
  • pve2.sys.log (12.8 KB)
Thank you for the logs!

I'd suggest first updating to the latest PVE version (7.2) and updating both the BIOS and the NIC firmware as well. If the issue still persists, we can then go from there.
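
If it helps, the usual path on the PVE side is just the standard upgrade, assuming the repositories are configured correctly:
Code:
apt update && apt dist-upgrade    # upgrade all PVE packages
pveversion -v                     # verify the versions afterwards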
 
Hi Mira,

We have a new host that is not a member of the cluster yet. The node has the latest firmware and PVE:
Code:
proxmox-ve: 7.2-1 (running kernel: 5.15.53-1-pve)
pve-manager: 7.2-11 (running version: 7.2-11/b76d3178)
pve-kernel-helper: 7.2-12
pve-kernel-5.15: 7.2-10
pve-kernel-5.15.53-1-pve: 5.15.53-1
ceph-fuse: 14.2.21-1
corosync: 3.1.5-pve2
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: 0.8.36+pve1
ifupdown2: residual config
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve1
libproxmox-acme-perl: 1.4.2
libproxmox-backup-qemu0: 1.3.1-1
libpve-access-control: 7.2-4
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.2-2
libpve-guest-common-perl: 4.1-2
libpve-http-server-perl: 4.1-3
libpve-storage-perl: 7.2-8
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.0-3
lxcfs: 4.0.12-pve1
novnc-pve: 1.3.0-3
proxmox-backup-client: 2.2.6-1
proxmox-backup-file-restore: 2.2.6-1
proxmox-mini-journalreader: 1.3-1
proxmox-widget-toolkit: 3.5.1
pve-cluster: 7.2-2
pve-container: 4.2-2
pve-docs: 7.2-2
pve-edk2-firmware: 3.20220526-1
pve-firewall: 4.2-6
pve-firmware: 3.5-2
pve-ha-manager: 3.4.0
pve-i18n: 2.7-2
pve-qemu-kvm: 7.0.0-3
pve-xtermjs: 4.16.0-1
qemu-server: 7.2-4
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.7.1~bpo11+1
vncterm: 1.7-1
zfsutils-linux: 2.1.5-pve1

The networking is the same as on pve2, and the NFS volume on the NetApp storage is the same one. I attached it to this node as well.

First issue (creating a new disk image on newly attached NFS storage does not work): I can reproduce it.
Second issue (disabling newly attached NFS storage caused a node / server restart): I can't reproduce it.

Here are the logs and the steps:

Creating a new VM:
Code:
Oct 10 15:40:30 pve5 pvedaemon[3672]: <root@pam> starting task UPID:pve5:0011EDAA:030308A8:634420CE:qmcreate:100:root@pam:
Oct 10 15:40:30 pve5 pvedaemon[3672]: <root@pam> end task UPID:pve5:0011EDAA:030308A8:634420CE:qmcreate:100:root@pam: OK

Added the new NFS storage in the web UI:
Code:
Oct 10 15:42:54 pve5 kernel: [505454.857179] FS-Cache: Loaded
Oct 10 15:42:55 pve5 kernel: [505454.980670] FS-Cache: Netfs 'nfs' registered for caching
Oct 10 15:42:55 pve5 kernel: [505455.382565] NFS: Registering the id_resolver key type
Oct 10 15:42:55 pve5 kernel: [505455.382583] Key type id_resolver registered
Oct 10 15:42:55 pve5 kernel: [505455.382586] Key type id_legacy registered
Oct 10 15:42:55 pve5 nfsidmap[1175338]: nss_getpwnam: name 'root@open' does not map into domain 'localdomain'
Oct 10 15:42:55 pve5 kernel: [505455.690648] nfs4filelayout_init: NFSv4 File Layout Driver Registering...

Added a new raw disk from the NFS volume:
Code:
Oct 10 15:43:24 pve5 pvedaemon[3671]: <root@pam> update VM 100: -scsi0 mail_srv:5,format=raw
Oct 10 15:43:24 pve5 pvedaemon[3671]: <root@pam> starting task UPID:pve5:0011EF8A:03034CB6:6344217C:qmconfig:100:root@pam:

After 2.5 minutes I stopped the task manually:
Code:
Oct 10 15:45:52 pve5 pvedaemon[1175434]: VM 100 creating disks failed
Oct 10 15:45:52 pve5 pvedaemon[1175434]: received interrupt
Oct 10 15:45:52 pve5 pvedaemon[3671]: <root@pam> end task UPID:pve5:0011EF8A:03034CB6:6344217C:qmconfig:100:root@pam: received interrupt

Disabled and re-enabled the NFS storage in the web UI.

Added a new raw disk from the NFS volume and stopped the task after 1.5 minutes:
Code:
Oct 10 15:49:45 pve5 pvedaemon[3671]: <root@pam> update VM 100: -scsi0 mail_srv:5,format=raw
Oct 10 15:49:45 pve5 pvedaemon[3671]: <root@pam> starting task UPID:pve5:0011F373:0303E1B2:634422F9:qmconfig:100:root@pam:
Oct 10 15:51:19 pve5 pvedaemon[1176435]: VM 100 creating disks failed
Oct 10 15:51:19 pve5 pvedaemon[1176435]: received interrupt
Oct 10 15:51:19 pve5 pvedaemon[3671]: <root@pam> end task UPID:pve5:0011F373:0303E1B2:634422F9:qmconfig:100:root@pam: received interrupt

Disabled and re-enabled the NFS storage in the web UI.

Added a new raw disk from local storage; it completed immediately:
Code:
Oct 10 15:52:37 pve5 pvedaemon[3671]: <root@pam> update VM 100: -scsi0 local:5,format=raw
Oct 10 15:52:37 pve5 pvedaemon[3671]: <root@pam> starting task UPID:pve5:0011F550:030424AC:634423A5:qmconfig:100:root@pam:
Oct 10 15:52:37 pve5 pvedaemon[3671]: <root@pam> end task UPID:pve5:0011F550:030424AC:634423A5:qmconfig:100:root@pam: OK

Added a new raw disk (size: 1 GB) from the NFS volume and stopped the task after 2 minutes:
Code:
Oct 10 15:52:56 pve5 pvedaemon[3672]: <root@pam> update VM 100: -scsi1 mail_srv:1,format=raw
Oct 10 15:52:56 pve5 pvedaemon[3672]: <root@pam> starting task UPID:pve5:0011F589:03042C08:634423B8:qmconfig:100:root@pam:
Oct 10 15:54:44 pve5 pvedaemon[1176969]: VM 100 creating disks failed
Oct 10 15:54:44 pve5 pvedaemon[1176969]: received interrupt
Oct 10 15:54:44 pve5 pvedaemon[3672]: <root@pam> end task UPID:pve5:0011F589:03042C08:634423B8:qmconfig:100:root@pam: received interrupt

Added a new qcow2 disk from the NFS volume; it couldn't be created either:
Code:
Oct 10 16:24:04 pve5 pvedaemon[3672]: <root@pam> update VM 100: -scsi1 mail_srv:5,format=qcow2,backup=0
Oct 10 16:24:04 pve5 pvedaemon[3672]: <root@pam> starting task UPID:pve5:001208DB:030705F8:63442B04:qmconfig:100:root@pam:
Oct 10 16:24:47 pve5 pvedaemon[1181915]: VM 100 creating disks failed
Oct 10 16:24:47 pve5 pvedaemon[1181915]: unable to create image: received interrupt
Oct 10 16:24:47 pve5 pvedaemon[3672]: <root@pam> end task UPID:pve5:001208DB:030705F8:63442B04:qmconfig:100:root@pam: unable to create image: received interrupt

This is the task detail for the raw image:
Code:
Formatting '/mnt/pve/mail_srv/images/100/vm-100-disk-0.raw', fmt=raw size=5368709120 preallocation=off
This is the task detail for the qcow2 image:
Code:
Formatting '/mnt/pve/mail_srv/images/100/vm-100-disk-0.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=5368709120 lazy_refcounts=off refcount_bits=16

These are the mount options:
Code:
192.168.90.202:/mail_srv on /mnt/pve/mail_srv type nfs4 (rw,relatime,vers=4.2,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.90.95,local_lock=none,addr=192.168.90.202)
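
To double-check what the kernel actually negotiated, the effective per-mount options can also be read directly (generic checks, nothing specific to our setup):
Code:
nfsstat -m                    # effective NFS mount options per mount point
grep mail_srv /proc/mounts    # the same information straight from the kernel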

Do you have any idea what the problem is?

Thanks,
Attila
 
Meanwhile, I forgot about one disk-creation task, and it timed out after 6 minutes.

Some kernel traces appeared during that time:

Code:
Oct 10 16:29:08 pve5 pvedaemon[3671]: <root@pam> update VM 100: -scsi1 mail_srv:5,format=raw,backup=0
Oct 10 16:29:08 pve5 pvedaemon[3671]: <root@pam> starting task UPID:pve5:00120C25:03077C88:63442C34:qmconfig:100:root@pam:
Oct 10 16:33:08 pve5 kernel: [508468.343512] INFO: task kworker/u113:0:1179144 blocked for more than 120 seconds.
Oct 10 16:33:08 pve5 kernel: [508468.343583]       Tainted: P           O      5.15.53-1-pve #1
Oct 10 16:33:08 pve5 kernel: [508468.343627] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 10 16:33:08 pve5 kernel: [508468.343681] task:kworker/u113:0  state:D stack:    0 pid:1179144 ppid:     2 flags:0x00004000
Oct 10 16:33:08 pve5 kernel: [508468.343690] Workqueue: events_unbound io_ring_exit_work
Oct 10 16:33:08 pve5 kernel: [508468.343705] Call Trace:
Oct 10 16:33:08 pve5 kernel: [508468.343708]  <TASK>
Oct 10 16:33:08 pve5 kernel: [508468.343714]  __schedule+0x33d/0x1750
Oct 10 16:33:08 pve5 kernel: [508468.343722]  ? enqueue_entity+0x17d/0x760
Oct 10 16:33:08 pve5 kernel: [508468.343733]  ? enqueue_task_fair+0x189/0x6a0
Oct 10 16:33:08 pve5 kernel: [508468.343740]  schedule+0x4e/0xc0
Oct 10 16:33:08 pve5 kernel: [508468.343744]  schedule_timeout+0x103/0x140
Oct 10 16:33:08 pve5 kernel: [508468.343751]  ? ttwu_do_activate+0x72/0xf0
Oct 10 16:33:08 pve5 kernel: [508468.343761]  __wait_for_common+0xae/0x150
Oct 10 16:33:08 pve5 kernel: [508468.343766]  ? usleep_range_state+0x90/0x90
Oct 10 16:33:08 pve5 kernel: [508468.343771]  wait_for_completion+0x24/0x30
Oct 10 16:33:08 pve5 kernel: [508468.343775]  io_ring_exit_work+0x194/0x6f0
Oct 10 16:33:08 pve5 kernel: [508468.343781]  ? __schedule+0x345/0x1750
Oct 10 16:33:08 pve5 kernel: [508468.343784]  ? io_uring_del_tctx_node+0xc0/0xc0
Oct 10 16:33:08 pve5 kernel: [508468.343795]  process_one_work+0x22b/0x3d0
Oct 10 16:33:08 pve5 kernel: [508468.343802]  worker_thread+0x53/0x420
Oct 10 16:33:08 pve5 kernel: [508468.343806]  ? process_one_work+0x3d0/0x3d0
Oct 10 16:33:08 pve5 kernel: [508468.343810]  kthread+0x12a/0x150
Oct 10 16:33:08 pve5 kernel: [508468.343819]  ? set_kthread_struct+0x50/0x50
Oct 10 16:33:08 pve5 kernel: [508468.343826]  ret_from_fork+0x22/0x30
Oct 10 16:33:08 pve5 kernel: [508468.343838]  </TASK>
Oct 10 16:33:08 pve5 kernel: [508468.343841] INFO: task kworker/u113:1:1181485 blocked for more than 120 seconds.
Oct 10 16:33:08 pve5 kernel: [508468.343894]       Tainted: P           O      5.15.53-1-pve #1
Oct 10 16:33:08 pve5 kernel: [508468.343936] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 10 16:33:08 pve5 kernel: [508468.343989] task:kworker/u113:1  state:D stack:    0 pid:1181485 ppid:     2 flags:0x00004000
Oct 10 16:33:08 pve5 kernel: [508468.343996] Workqueue: events_unbound io_ring_exit_work
Oct 10 16:33:08 pve5 kernel: [508468.344002] Call Trace:
Oct 10 16:33:08 pve5 kernel: [508468.344003]  <TASK>
Oct 10 16:33:08 pve5 kernel: [508468.344006]  __schedule+0x33d/0x1750
Oct 10 16:33:08 pve5 kernel: [508468.344010]  ? enqueue_entity+0x17d/0x760
Oct 10 16:33:08 pve5 kernel: [508468.344018]  ? enqueue_task_fair+0x189/0x6a0
Oct 10 16:33:08 pve5 kernel: [508468.344024]  schedule+0x4e/0xc0
Oct 10 16:33:08 pve5 kernel: [508468.344027]  schedule_timeout+0x103/0x140
Oct 10 16:33:08 pve5 kernel: [508468.344033]  ? ttwu_do_activate+0x72/0xf0
Oct 10 16:33:08 pve5 kernel: [508468.344041]  __wait_for_common+0xae/0x150
Oct 10 16:33:08 pve5 kernel: [508468.344045]  ? usleep_range_state+0x90/0x90
Oct 10 16:33:08 pve5 kernel: [508468.344051]  wait_for_completion+0x24/0x30
Oct 10 16:33:08 pve5 kernel: [508468.344055]  io_ring_exit_work+0x194/0x6f0
Oct 10 16:33:08 pve5 kernel: [508468.344060]  ? __schedule+0x345/0x1750
Oct 10 16:33:08 pve5 kernel: [508468.344064]  ? io_uring_del_tctx_node+0xc0/0xc0
Oct 10 16:33:08 pve5 kernel: [508468.344072]  process_one_work+0x22b/0x3d0
Oct 10 16:33:08 pve5 kernel: [508468.344078]  worker_thread+0x53/0x420
Oct 10 16:33:08 pve5 kernel: [508468.344082]  ? process_one_work+0x3d0/0x3d0
Oct 10 16:33:08 pve5 kernel: [508468.344086]  kthread+0x12a/0x150
Oct 10 16:33:08 pve5 kernel: [508468.344093]  ? set_kthread_struct+0x50/0x50
Oct 10 16:33:08 pve5 kernel: [508468.344099]  ret_from_fork+0x22/0x30
Oct 10 16:33:08 pve5 kernel: [508468.344107]  </TASK>
Oct 10 16:33:08 pve5 kernel: [508468.344111] INFO: task task UPID:pve5::1182757 blocked for more than 120 seconds.
Oct 10 16:33:08 pve5 kernel: [508468.344163]       Tainted: P           O      5.15.53-1-pve #1
Oct 10 16:33:08 pve5 kernel: [508468.344205] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 10 16:33:08 pve5 kernel: [508468.344258] task:task UPID:pve5: state:D stack:    0 pid:1182757 ppid:  3671 flags:0x00004002
Oct 10 16:33:08 pve5 kernel: [508468.344263] Call Trace:
Oct 10 16:33:08 pve5 kernel: [508468.344265]  <TASK>
Oct 10 16:33:08 pve5 kernel: [508468.344267]  __schedule+0x33d/0x1750
Oct 10 16:33:08 pve5 kernel: [508468.344275]  ? nfs4_have_delegation+0x28/0x70 [nfsv4]
Oct 10 16:33:08 pve5 kernel: [508468.344369]  ? nfs_attribute_cache_expired+0x33/0x90 [nfs]
Oct 10 16:33:08 pve5 kernel: [508468.344416]  ? nfs4_have_delegation+0x28/0x70 [nfsv4]
Oct 10 16:33:08 pve5 kernel: [508468.344469]  ? __nfs_revalidate_inode+0x2a2/0x340 [nfs]
Oct 10 16:33:08 pve5 kernel: [508468.344503]  schedule+0x4e/0xc0
Oct 10 16:33:08 pve5 kernel: [508468.344508]  io_schedule+0x46/0x80
Oct 10 16:33:08 pve5 kernel: [508468.344511]  wait_on_page_bit_common+0x114/0x3e0
Oct 10 16:33:08 pve5 kernel: [508468.344520]  ? filemap_invalidate_unlock_two+0x50/0x50
Oct 10 16:33:08 pve5 kernel: [508468.344527]  wait_on_page_bit+0x3f/0x50
Oct 10 16:33:08 pve5 kernel: [508468.344533]  wait_on_page_writeback+0x26/0x80
Oct 10 16:33:08 pve5 kernel: [508468.344539]  __filemap_fdatawait_range+0x97/0x120
Oct 10 16:33:08 pve5 kernel: [508468.344544]  ? __do_compat_sys_newfstatat+0x61/0x70
Oct 10 16:33:08 pve5 kernel: [508468.344551]  ? path_lookupat+0xae/0x1c0
Oct 10 16:33:08 pve5 kernel: [508468.344558]  ? filename_lookup+0xcb/0x1d0
Oct 10 16:33:08 pve5 kernel: [508468.344562]  filemap_write_and_wait_range+0x88/0xe0
Oct 10 16:33:08 pve5 kernel: [508468.344570]  nfs_getattr+0x407/0x420 [nfs]
Oct 10 16:33:08 pve5 kernel: [508468.344602]  vfs_getattr_nosec+0xbd/0xe0
Oct 10 16:33:08 pve5 kernel: [508468.344607]  vfs_statx+0x9d/0x130
Oct 10 16:33:08 pve5 kernel: [508468.344612]  __do_sys_newlstat+0x3e/0x80
Oct 10 16:33:08 pve5 kernel: [508468.344617]  __x64_sys_newlstat+0x16/0x20
Oct 10 16:33:08 pve5 kernel: [508468.344621]  do_syscall_64+0x5c/0xc0
Oct 10 16:33:08 pve5 kernel: [508468.344626]  ? exc_page_fault+0x89/0x170
Oct 10 16:33:08 pve5 kernel: [508468.344631]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
Oct 10 16:33:08 pve5 kernel: [508468.344637] RIP: 0033:0x7fc57359de06
Oct 10 16:33:08 pve5 kernel: [508468.344642] RSP: 002b:00007ffe0096eaa8 EFLAGS: 00000246 ORIG_RAX: 0000000000000006
Oct 10 16:33:08 pve5 kernel: [508468.344647] RAX: ffffffffffffffda RBX: 0000559c33dfe238 RCX: 00007fc57359de06
Oct 10 16:33:08 pve5 kernel: [508468.344650] RDX: 00007ffe0096eae0 RSI: 00007ffe0096eae0 RDI: 0000559c30e6cef0
Oct 10 16:33:08 pve5 kernel: [508468.344653] RBP: 0000559c2cacf2a0 R08: 0000000000000001 R09: 0000000000000111
Oct 10 16:33:08 pve5 kernel: [508468.344655] R10: 00000000000045e7 R11: 0000000000000246 R12: 0000000000000001
Oct 10 16:33:08 pve5 kernel: [508468.344658] R13: 0000559c30e6cef0 R14: 0000559c33dfe240 R15: 00007ffe0096ead8
Oct 10 16:33:08 pve5 kernel: [508468.344663]  </TASK>
Oct 10 16:34:20 pve5 nfsidmap[1183617]: nss_getpwnam: name 'root@open' does not map into domain 'localdomain'
Oct 10 16:34:27 pve5 pvedaemon[3671]: worker exit
Oct 10 16:34:27 pve5 pvedaemon[3669]: worker 3671 finished
Oct 10 16:34:27 pve5 pvedaemon[3669]: starting 1 worker(s)
Oct 10 16:34:27 pve5 pvedaemon[3669]: worker 1183641 started
Oct 10 16:35:09 pve5 kernel: [508589.176843] INFO: task kworker/u113:0:1179144 blocked for more than 241 seconds.
Oct 10 16:35:09 pve5 kernel: [508589.176914]       Tainted: P           O      5.15.53-1-pve #1
Oct 10 16:35:09 pve5 kernel: [508589.176958] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 10 16:35:09 pve5 kernel: [508589.177012] task:kworker/u113:0  state:D stack:    0 pid:1179144 ppid:     2 flags:0x00004000
Oct 10 16:35:09 pve5 kernel: [508589.177022] Workqueue: events_unbound io_ring_exit_work
Oct 10 16:35:09 pve5 kernel: [508589.177036] Call Trace:
Oct 10 16:35:09 pve5 kernel: [508589.177039]  <TASK>
Oct 10 16:35:09 pve5 kernel: [508589.177044]  __schedule+0x33d/0x1750
Oct 10 16:35:09 pve5 kernel: [508589.177052]  ? enqueue_entity+0x17d/0x760
Oct 10 16:35:09 pve5 kernel: [508589.177062]  ? enqueue_task_fair+0x189/0x6a0
Oct 10 16:35:09 pve5 kernel: [508589.177069]  schedule+0x4e/0xc0
Oct 10 16:35:09 pve5 kernel: [508589.177073]  schedule_timeout+0x103/0x140
Oct 10 16:35:09 pve5 kernel: [508589.177080]  ? ttwu_do_activate+0x72/0xf0
Oct 10 16:35:09 pve5 kernel: [508589.177090]  __wait_for_common+0xae/0x150
Oct 10 16:35:09 pve5 kernel: [508589.177094]  ? usleep_range_state+0x90/0x90
Oct 10 16:35:09 pve5 kernel: [508589.177100]  wait_for_completion+0x24/0x30
Oct 10 16:35:09 pve5 kernel: [508589.177104]  io_ring_exit_work+0x194/0x6f0
Oct 10 16:35:09 pve5 kernel: [508589.177109]  ? __schedule+0x345/0x1750
Oct 10 16:35:09 pve5 kernel: [508589.177113]  ? io_uring_del_tctx_node+0xc0/0xc0
Oct 10 16:35:09 pve5 kernel: [508589.177123]  process_one_work+0x22b/0x3d0
Oct 10 16:35:09 pve5 kernel: [508589.177130]  worker_thread+0x53/0x420
Oct 10 16:35:09 pve5 kernel: [508589.177134]  ? process_one_work+0x3d0/0x3d0
Oct 10 16:35:09 pve5 kernel: [508589.177138]  kthread+0x12a/0x150
Oct 10 16:35:09 pve5 kernel: [508589.177147]  ? set_kthread_struct+0x50/0x50
Oct 10 16:35:09 pve5 kernel: [508589.177153]  ret_from_fork+0x22/0x30
Oct 10 16:35:09 pve5 kernel: [508589.177165]  </TASK>
Oct 10 16:35:09 pve5 kernel: [508589.177167] INFO: task kworker/u113:1:1181485 blocked for more than 241 seconds.
Oct 10 16:35:09 pve5 kernel: [508589.177222]       Tainted: P           O      5.15.53-1-pve #1
Oct 10 16:35:09 pve5 kernel: [508589.177264] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 10 16:35:09 pve5 kernel: [508589.177316] task:kworker/u113:1  state:D stack:    0 pid:1181485 ppid:     2 flags:0x00004000
Oct 10 16:35:09 pve5 kernel: [508589.177322] Workqueue: events_unbound io_ring_exit_work
Oct 10 16:35:09 pve5 kernel: [508589.177328] Call Trace:
Oct 10 16:35:09 pve5 kernel: [508589.177330]  <TASK>
Oct 10 16:35:09 pve5 kernel: [508589.177332]  __schedule+0x33d/0x1750
Oct 10 16:35:09 pve5 kernel: [508589.177337]  ? enqueue_entity+0x17d/0x760
Oct 10 16:35:09 pve5 kernel: [508589.177344]  ? enqueue_task_fair+0x189/0x6a0
Oct 10 16:35:09 pve5 kernel: [508589.177351]  schedule+0x4e/0xc0
Oct 10 16:35:09 pve5 kernel: [508589.177354]  schedule_timeout+0x103/0x140
Oct 10 16:35:09 pve5 kernel: [508589.177359]  ? ttwu_do_activate+0x72/0xf0
Oct 10 16:35:09 pve5 kernel: [508589.177366]  __wait_for_common+0xae/0x150
Oct 10 16:35:09 pve5 kernel: [508589.177371]  ? usleep_range_state+0x90/0x90
Oct 10 16:35:09 pve5 kernel: [508589.177376]  wait_for_completion+0x24/0x30
Oct 10 16:35:09 pve5 kernel: [508589.177380]  io_ring_exit_work+0x194/0x6f0
Oct 10 16:35:09 pve5 kernel: [508589.177385]  ? __schedule+0x345/0x1750
Oct 10 16:35:09 pve5 kernel: [508589.177389]  ? io_uring_del_tctx_node+0xc0/0xc0
Oct 10 16:35:09 pve5 kernel: [508589.177397]  process_one_work+0x22b/0x3d0
Oct 10 16:35:09 pve5 kernel: [508589.177403]  worker_thread+0x53/0x420
Oct 10 16:35:09 pve5 kernel: [508589.177407]  ? process_one_work+0x3d0/0x3d0
Oct 10 16:35:09 pve5 kernel: [508589.177411]  kthread+0x12a/0x150
Oct 10 16:35:09 pve5 kernel: [508589.177419]  ? set_kthread_struct+0x50/0x50
Oct 10 16:35:09 pve5 kernel: [508589.177425]  ret_from_fork+0x22/0x30
Oct 10 16:35:09 pve5 kernel: [508589.177434]  </TASK>
Oct 10 16:35:09 pve5 kernel: [508589.177437] INFO: task task UPID:pve5::1182757 blocked for more than 241 seconds.
Oct 10 16:35:09 pve5 kernel: [508589.177489]       Tainted: P           O      5.15.53-1-pve #1
Oct 10 16:35:09 pve5 kernel: [508589.177531] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 10 16:35:09 pve5 kernel: [508589.177583] task:task UPID:pve5: state:D stack:    0 pid:1182757 ppid:     1 flags:0x00004002
Oct 10 16:35:09 pve5 kernel: [508589.177588] Call Trace:
Oct 10 16:35:09 pve5 kernel: [508589.177589]  <TASK>
Oct 10 16:35:09 pve5 kernel: [508589.177592]  __schedule+0x33d/0x1750
Oct 10 16:35:09 pve5 kernel: [508589.177600]  ? nfs4_have_delegation+0x28/0x70 [nfsv4]
Oct 10 16:35:09 pve5 kernel: [508589.177695]  ? nfs_attribute_cache_expired+0x33/0x90 [nfs]
Oct 10 16:35:09 pve5 kernel: [508589.177743]  ? nfs4_have_delegation+0x28/0x70 [nfsv4]
Oct 10 16:35:09 pve5 kernel: [508589.177794]  ? __nfs_revalidate_inode+0x2a2/0x340 [nfs]
Oct 10 16:35:09 pve5 kernel: [508589.177829]  schedule+0x4e/0xc0
Oct 10 16:35:09 pve5 kernel: [508589.177833]  io_schedule+0x46/0x80
Oct 10 16:35:09 pve5 kernel: [508589.177836]  wait_on_page_bit_common+0x114/0x3e0
Oct 10 16:35:09 pve5 kernel: [508589.177846]  ? filemap_invalidate_unlock_two+0x50/0x50
Oct 10 16:35:09 pve5 kernel: [508589.177853]  wait_on_page_bit+0x3f/0x50
Oct 10 16:35:09 pve5 kernel: [508589.177859]  wait_on_page_writeback+0x26/0x80
Oct 10 16:35:09 pve5 kernel: [508589.177864]  __filemap_fdatawait_range+0x97/0x120
Oct 10 16:35:09 pve5 kernel: [508589.177871]  ? __do_compat_sys_newfstatat+0x61/0x70
Oct 10 16:35:09 pve5 kernel: [508589.177877]  ? path_lookupat+0xae/0x1c0
Oct 10 16:35:09 pve5 kernel: [508589.177884]  ? filename_lookup+0xcb/0x1d0
Oct 10 16:35:09 pve5 kernel: [508589.177889]  filemap_write_and_wait_range+0x88/0xe0
Oct 10 16:35:09 pve5 kernel: [508589.177896]  nfs_getattr+0x407/0x420 [nfs]
Oct 10 16:35:09 pve5 kernel: [508589.177929]  vfs_getattr_nosec+0xbd/0xe0
Oct 10 16:35:09 pve5 kernel: [508589.177934]  vfs_statx+0x9d/0x130
Oct 10 16:35:09 pve5 kernel: [508589.177938]  __do_sys_newlstat+0x3e/0x80
Oct 10 16:35:09 pve5 kernel: [508589.177944]  __x64_sys_newlstat+0x16/0x20
Oct 10 16:35:09 pve5 kernel: [508589.177948]  do_syscall_64+0x5c/0xc0
Oct 10 16:35:09 pve5 kernel: [508589.177952]  ? exc_page_fault+0x89/0x170
Oct 10 16:35:09 pve5 kernel: [508589.177958]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
Oct 10 16:35:09 pve5 kernel: [508589.177964] RIP: 0033:0x7fc57359de06
Oct 10 16:35:09 pve5 kernel: [508589.177969] RSP: 002b:00007ffe0096eaa8 EFLAGS: 00000246 ORIG_RAX: 0000000000000006
Oct 10 16:35:09 pve5 kernel: [508589.177974] RAX: ffffffffffffffda RBX: 0000559c33dfe238 RCX: 00007fc57359de06
Oct 10 16:35:09 pve5 kernel: [508589.177977] RDX: 00007ffe0096eae0 RSI: 00007ffe0096eae0 RDI: 0000559c30e6cef0
Oct 10 16:35:09 pve5 kernel: [508589.177979] RBP: 0000559c2cacf2a0 R08: 0000000000000001 R09: 0000000000000111
Oct 10 16:35:09 pve5 kernel: [508589.177982] R10: 00000000000045e7 R11: 0000000000000246 R12: 0000000000000001
Oct 10 16:35:09 pve5 kernel: [508589.177985] R13: 0000559c30e6cef0 R14: 0000559c33dfe240 R15: 00007ffe0096ead8
Oct 10 16:35:09 pve5 kernel: [508589.177989]  </TASK>
Oct 10 16:35:09 pve5 pvedaemon[1182757]: VM 100 creating disks failed
Oct 10 16:35:09 pve5 pvedaemon[1182757]: unable to create image: 'storage-mail_srv'-locked command timed out - aborting
 
I disabled, deleted and re-added the NFS storage via the web UI. I tried an image creation again, but the result was the same, with blocked PIDs:

Code:
Oct 10 17:12:03 pve5 pvedaemon[3670]: <root@pam> update VM 100: -scsi0 mail_srv:5,format=raw,backup=0
Oct 10 17:12:03 pve5 pvedaemon[3670]: <root@pam> starting task UPID:pve5:001226BD:030B6A94:63443643:qmconfig:100:root@pam:
Oct 10 17:13:20 pve5 pvedaemon[3670]: <root@pam> successful auth for user 'root@pam'
Oct 10 17:13:42 pve5 pvedaemon[3670]: worker exit
Oct 10 17:13:42 pve5 pvedaemon[3669]: worker 3670 finished
Oct 10 17:13:42 pve5 pvedaemon[3669]: starting 1 worker(s)
Oct 10 17:13:42 pve5 pvedaemon[3669]: worker 1189818 started
Oct 10 17:14:15 pve5 postfix/smtpd[1189931]: connect from localhost[127.0.0.1]
Oct 10 17:14:15 pve5 postfix/smtpd[1189931]: disconnect from localhost[127.0.0.1] helo=1 quit=1 commands=2
Oct 10 17:15:25 pve5 kernel: [511005.843002] INFO: task kworker/u113:3:1188176 blocked for more than 120 seconds.
Oct 10 17:15:25 pve5 kernel: [511005.843074]       Tainted: P           O      5.15.53-1-pve #1
Oct 10 17:15:25 pve5 kernel: [511005.843118] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 10 17:15:25 pve5 kernel: [511005.843172] task:kworker/u113:3  state:D stack:    0 pid:1188176 ppid:     2 flags:0x00004000
Oct 10 17:15:25 pve5 kernel: [511005.843181] Workqueue: events_unbound io_ring_exit_work
Oct 10 17:15:25 pve5 kernel: [511005.843195] Call Trace:
Oct 10 17:15:25 pve5 kernel: [511005.843198]  <TASK>
Oct 10 17:15:25 pve5 kernel: [511005.843203]  __schedule+0x33d/0x1750
Oct 10 17:15:25 pve5 kernel: [511005.843212]  ? enqueue_entity+0x17d/0x760
Oct 10 17:15:25 pve5 kernel: [511005.843222]  ? enqueue_task_fair+0x189/0x6a0
Oct 10 17:15:25 pve5 kernel: [511005.843228]  schedule+0x4e/0xc0
Oct 10 17:15:25 pve5 kernel: [511005.843232]  schedule_timeout+0x103/0x140
Oct 10 17:15:25 pve5 kernel: [511005.843240]  ? ttwu_do_activate+0x72/0xf0
Oct 10 17:15:25 pve5 kernel: [511005.843250]  __wait_for_common+0xae/0x150
Oct 10 17:15:25 pve5 kernel: [511005.843254]  ? usleep_range_state+0x90/0x90
Oct 10 17:15:25 pve5 kernel: [511005.843260]  wait_for_completion+0x24/0x30
Oct 10 17:15:25 pve5 kernel: [511005.843264]  io_ring_exit_work+0x194/0x6f0
Oct 10 17:15:25 pve5 kernel: [511005.843269]  ? __schedule+0x345/0x1750
Oct 10 17:15:25 pve5 kernel: [511005.843273]  ? wb_update_bandwidth+0x4f/0x70
Oct 10 17:15:25 pve5 kernel: [511005.843281]  ? io_uring_del_tctx_node+0xc0/0xc0
Oct 10 17:15:25 pve5 kernel: [511005.843291]  process_one_work+0x22b/0x3d0
Oct 10 17:15:25 pve5 kernel: [511005.843298]  worker_thread+0x53/0x420
Oct 10 17:15:25 pve5 kernel: [511005.843303]  ? process_one_work+0x3d0/0x3d0
Oct 10 17:15:25 pve5 kernel: [511005.843307]  kthread+0x12a/0x150
Oct 10 17:15:25 pve5 kernel: [511005.843316]  ? set_kthread_struct+0x50/0x50
Oct 10 17:15:25 pve5 kernel: [511005.843322]  ret_from_fork+0x22/0x30
Oct 10 17:15:25 pve5 kernel: [511005.843333]  </TASK>
Oct 10 17:15:25 pve5 kernel: [511005.843336] INFO: task kworker/u113:2:1189148 blocked for more than 120 seconds.
Oct 10 17:15:25 pve5 kernel: [511005.843390]       Tainted: P           O      5.15.53-1-pve #1
Oct 10 17:15:25 pve5 kernel: [511005.843432] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 10 17:15:25 pve5 kernel: [511005.843485] task:kworker/u113:2  state:D stack:    0 pid:1189148 ppid:     2 flags:0x00004000
Oct 10 17:15:25 pve5 kernel: [511005.843491] Workqueue: events_unbound io_ring_exit_work
Oct 10 17:15:25 pve5 kernel: [511005.843497] Call Trace:
Oct 10 17:15:25 pve5 kernel: [511005.843499]  <TASK>
Oct 10 17:15:25 pve5 kernel: [511005.843501]  __schedule+0x33d/0x1750
Oct 10 17:15:25 pve5 kernel: [511005.843505]  ? lock_timer_base+0x3b/0xd0
Oct 10 17:15:25 pve5 kernel: [511005.843516]  ? lock_timer_base+0x3b/0xd0
Oct 10 17:15:25 pve5 kernel: [511005.843522]  ? __mod_timer+0x271/0x440
Oct 10 17:15:25 pve5 kernel: [511005.843526]  schedule+0x4e/0xc0
Oct 10 17:15:25 pve5 kernel: [511005.843530]  schedule_timeout+0x103/0x140
Oct 10 17:15:25 pve5 kernel: [511005.843535]  ? __bpf_trace_tick_stop+0x20/0x20
Oct 10 17:15:25 pve5 kernel: [511005.843543]  __wait_for_common+0xae/0x150
Oct 10 17:15:25 pve5 kernel: [511005.843547]  ? usleep_range_state+0x90/0x90
Oct 10 17:15:25 pve5 kernel: [511005.843553]  wait_for_completion+0x24/0x30
Oct 10 17:15:25 pve5 kernel: [511005.843557]  io_ring_exit_work+0x194/0x6f0
Oct 10 17:15:25 pve5 kernel: [511005.843562]  ? __schedule+0x345/0x1750
Oct 10 17:15:25 pve5 kernel: [511005.843566]  ? io_uring_del_tctx_node+0xc0/0xc0
Oct 10 17:15:25 pve5 kernel: [511005.843575]  process_one_work+0x22b/0x3d0
Oct 10 17:15:25 pve5 kernel: [511005.843580]  worker_thread+0x53/0x420
Oct 10 17:15:25 pve5 kernel: [511005.843584]  ? process_one_work+0x3d0/0x3d0
Oct 10 17:15:25 pve5 kernel: [511005.843589]  kthread+0x12a/0x150
Oct 10 17:15:25 pve5 kernel: [511005.843596]  ? set_kthread_struct+0x50/0x50
Oct 10 17:15:25 pve5 kernel: [511005.843603]  ret_from_fork+0x22/0x30
Oct 10 17:15:25 pve5 kernel: [511005.843611]  </TASK>
Oct 10 17:15:25 pve5 kernel: [511005.843613] INFO: task task UPID:pve5::1189565 blocked for more than 120 seconds.
Oct 10 17:15:25 pve5 kernel: [511005.843666]       Tainted: P           O      5.15.53-1-pve #1
Oct 10 17:15:25 pve5 kernel: [511005.843707] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 10 17:15:25 pve5 kernel: [511005.843759] task:task UPID:pve5: state:D stack:    0 pid:1189565 ppid:     1 flags:0x00004002
Oct 10 17:15:25 pve5 kernel: [511005.843764] Call Trace:
Oct 10 17:15:25 pve5 kernel: [511005.843766]  <TASK>
Oct 10 17:15:25 pve5 kernel: [511005.843768]  __schedule+0x33d/0x1750
Oct 10 17:15:25 pve5 kernel: [511005.843775]  ? nfs4_have_delegation+0x28/0x70 [nfsv4]
Oct 10 17:15:25 pve5 kernel: [511005.843870]  ? nfs_attribute_cache_expired+0x33/0x90 [nfs]
Oct 10 17:15:25 pve5 kernel: [511005.843918]  ? nfs4_have_delegation+0x28/0x70 [nfsv4]
Oct 10 17:15:25 pve5 kernel: [511005.843970]  ? __nfs_revalidate_inode+0x2a2/0x340 [nfs]
Oct 10 17:15:25 pve5 kernel: [511005.844004]  schedule+0x4e/0xc0
Oct 10 17:15:25 pve5 kernel: [511005.844008]  io_schedule+0x46/0x80
Oct 10 17:15:25 pve5 kernel: [511005.844012]  wait_on_page_bit_common+0x114/0x3e0
Oct 10 17:15:25 pve5 kernel: [511005.844021]  ? filemap_invalidate_unlock_two+0x50/0x50
Oct 10 17:15:25 pve5 kernel: [511005.844028]  wait_on_page_bit+0x3f/0x50
Oct 10 17:15:25 pve5 kernel: [511005.844034]  wait_on_page_writeback+0x26/0x80
Oct 10 17:15:25 pve5 kernel: [511005.844039]  __filemap_fdatawait_range+0x97/0x120
Oct 10 17:15:25 pve5 kernel: [511005.844045]  ? __do_compat_sys_newfstatat+0x61/0x70
Oct 10 17:15:25 pve5 kernel: [511005.844051]  ? path_lookupat+0xae/0x1c0
Oct 10 17:15:25 pve5 kernel: [511005.844059]  ? filename_lookup+0xcb/0x1d0
Oct 10 17:15:25 pve5 kernel: [511005.844063]  filemap_write_and_wait_range+0x88/0xe0
Oct 10 17:15:25 pve5 kernel: [511005.844070]  nfs_getattr+0x407/0x420 [nfs]
Oct 10 17:15:25 pve5 kernel: [511005.844103]  vfs_getattr_nosec+0xbd/0xe0
Oct 10 17:15:25 pve5 kernel: [511005.844107]  vfs_statx+0x9d/0x130
Oct 10 17:15:25 pve5 kernel: [511005.844112]  __do_sys_newlstat+0x3e/0x80
Oct 10 17:15:25 pve5 kernel: [511005.844118]  __x64_sys_newlstat+0x16/0x20
Oct 10 17:15:25 pve5 kernel: [511005.844122]  do_syscall_64+0x5c/0xc0
Oct 10 17:15:25 pve5 kernel: [511005.844126]  ? handle_mm_fault+0xd8/0x2c0
Oct 10 17:15:25 pve5 kernel: [511005.844134]  ? exit_to_user_mode_prepare+0x37/0x1b0
Oct 10 17:15:25 pve5 kernel: [511005.844140]  ? irqentry_exit_to_user_mode+0x9/0x20
Oct 10 17:15:25 pve5 kernel: [511005.844146]  ? irqentry_exit+0x1d/0x30
Oct 10 17:15:25 pve5 kernel: [511005.844151]  ? exc_page_fault+0x89/0x170
Oct 10 17:15:25 pve5 kernel: [511005.844155]  entry_SYSCALL_64_after_hwframe+0x61/0xcb
Oct 10 17:15:25 pve5 kernel: [511005.844162] RIP: 0033:0x7fc57359de06
Oct 10 17:15:25 pve5 kernel: [511005.844167] RSP: 002b:00007ffe0096eaa8 EFLAGS: 00000246 ORIG_RAX: 0000000000000006
Oct 10 17:15:25 pve5 kernel: [511005.844172] RAX: ffffffffffffffda RBX: 0000559c33dd1ae8 RCX: 00007fc57359de06
Oct 10 17:15:25 pve5 kernel: [511005.844175] RDX: 00007ffe0096eae0 RSI: 00007ffe0096eae0 RDI: 0000559c334a1530
Oct 10 17:15:25 pve5 kernel: [511005.844178] RBP: 0000559c2cacf2a0 R08: 0000000000000001 R09: 0000000000000111
Oct 10 17:15:25 pve5 kernel: [511005.844180] R10: 00000000000045e7 R11: 0000000000000246 R12: 0000000000000001
Oct 10 17:15:25 pve5 kernel: [511005.844183] R13: 0000559c334a1530 R14: 0000559c33dd1af0 R15: 00007ffe0096ead8
Oct 10 17:15:25 pve5 kernel: [511005.844187]  </TASK>
Oct 10 17:15:30 pve5 nfsidmap[1190104]: nss_getpwnam: name 'root@open' does not map into domain 'localdomain'
Oct 10 17:17:01 pve5 CRON[1190363]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
Oct 10 17:17:26 pve5 kernel: [511126.676333] INFO: task kworker/u113:3:1188176 blocked for more than 241 seconds.
Oct 10 17:17:26 pve5 kernel: [511126.676466]       Tainted: P           O      5.15.53-1-pve #1
Oct 10 17:17:26 pve5 kernel: [511126.676517] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Oct 10 17:17:26 pve5 kernel: [511126.676571] task:kworker/u113:3  state:D stack:    0 pid:1188176 ppid:     2 flags:0x00004000
Oct 10 17:17:26 pve5 kernel: [511126.676580] Workqueue: events_unbound io_ring_exit_work
Oct 10 17:17:26 pve5 kernel: [511126.676593] Call Trace:
Oct 10 17:17:26 pve5 kernel: [511126.676597]  <TASK>
Oct 10 17:17:26 pve5 kernel: [511126.676602]  __schedule+0x33d/0x1750
Oct 10 17:17:26 pve5 kernel: [511126.676610]  ? enqueue_entity+0x17d/0x760
Oct 10 17:17:26 pve5 kernel: [511126.676619]  ? enqueue_task_fair+0x189/0x6a0
Oct 10 17:17:26 pve5 kernel: [511126.676626]  schedule+0x4e/0xc0
Oct 10 17:17:26 pve5 kernel: [511126.676630]  schedule_timeout+0x103/0x140
Oct 10 17:17:26 pve5 kernel: [511126.676636]  ? ttwu_do_activate+0x72/0xf0
Oct 10 17:17:26 pve5 kernel: [511126.676644]  __wait_for_common+0xae/0x150
Oct 10 17:17:26 pve5 kernel: [511126.676649]  ? usleep_range_state+0x90/0x90
Oct 10 17:17:26 pve5 kernel: [511126.676655]  wait_for_completion+0x24/0x30
Oct 10 17:17:26 pve5 kernel: [511126.676659]  io_ring_exit_work+0x194/0x6f0
Oct 10 17:17:26 pve5 kernel: [511126.676664]  ? __schedule+0x345/0x1750
Oct 10 17:17:26 pve5 kernel: [511126.676668]  ? wb_update_bandwidth+0x4f/0x70
Oct 10 17:17:26 pve5 kernel: [511126.676675]  ? io_uring_del_tctx_node+0xc0/0xc0
Oct 10 17:17:26 pve5 kernel: [511126.676685]  process_one_work+0x22b/0x3d0
Oct 10 17:17:26 pve5 kernel: [511126.676691]  worker_thread+0x53/0x420
Oct 10 17:17:26 pve5 kernel: [511126.676696]  ? process_one_work+0x3d0/0x3d0
Oct 10 17:17:26 pve5 kernel: [511126.676700]  kthread+0x12a/0x150
Oct 10 17:17:26 pve5 kernel: [511126.676708]  ? set_kthread_struct+0x50/0x50
Oct 10 17:17:26 pve5 kernel: [511126.676715]  ret_from_fork+0x22/0x30
Oct 10 17:17:26 pve5 kernel: [511126.676725]  </TASK>
Oct 10 17:18:05 pve5 pvedaemon[1189565]: VM 100 creating disks failed
Oct 10 17:18:05 pve5 pvedaemon[1189565]: unable to create image: 'storage-mail_srv'-locked command timed out - aborting
 
Hi,

I did some tests with NFS. I think there are issues with the NFS mounts and with NFS v4.2.

When I add an NFS storage in the web UI, it appears there and is shown by mount on the CLI. If I use the defaults, it is mounted as version 4.2, like:
Code:
192.168.90.202:/backups on /mnt/pve/backups type nfs4 (rw,relatime,vers=4.2,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.90.95,local_lock=none,addr=192.168.90.202)

If I disable it in the web UI, it disappears from the nodes' storage list and I can't add disks to it, but the mount stays up on the CLI. That by itself is not a big deal.

But if I remove this NFS storage from the web UI entirely, the CLI mount remains up; it is still mounted according to df. A little strange.
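
For reference, this is how I clean up such a leftover mount by hand (a simple sketch; the lazy variant is only for when the plain umount hangs):
Code:
mount | grep backups          # confirm the mount is still there
umount /mnt/pve/backups       # detach it; try 'umount -l' if it hangs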

If I re-add the NFS volume in the web UI with a different version, such as 4.0, the CLI mount still shows version 4.2. The problem also persists if I just change the version in the web UI and save it; disabling and re-enabling the storage doesn't help there either.

Another big issue: the main problem described earlier ("can't add new image") exists with version 4.2. If I mount the volume with version 4.0 (after manually unmounting it from the CLI first, of course), I can add new disks on it.
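
Pinning the version can also be done in the storage definition itself. A sketch of what I mean for our mail_srv entry (the options line is the relevant part; the rest mirrors what the web UI writes to /etc/pve/storage.cfg):
Code:
nfs: mail_srv
        export /mail_srv
        path /mnt/pve/mail_srv
        server 192.168.90.202
        content images
        options vers=4.0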

Are these bugs, or am I doing something wrong?

Thanks,
Attila
 
