Update to Proxmox v6, update-initramfs creates bad initrd image leading to maintenance mode

Hi folks,
I just upgraded my cluster to be able to integrate a new server that was installed directly with v6.

It worked out fine, but one problem shows up on 2 of the 5 servers.

Here's the fstab that I will use to explain the problem:

/dev/pve/root / ext4 errors=remount-ro 0 1
/dev/pve/swap none swap sw 0 0
/dev/mapper/pve-data /var/lib/vz ext4 errors=remount-ro 0 0
proc /proc proc defaults 0 0


After the dist-upgrade, two servers rebooted into maintenance mode, and when logging in to the console I could see that the local storage /var/lib/vz had failed to mount.

I can't mount it manually to /var/lib/vz, but I am able to mount it to /mnt.
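
For reference, the manual recovery in the maintenance shell looks roughly like this (a sketch; the vgchange step may not even be needed if the LV is already active):

Code:
# check that the data LV is visible and active
lvs -o lv_name,vg_name,lv_attr,lv_size
# activate the pve volume group in case the data LV is still inactive
vgchange -ay pve
# mounting to /var/lib/vz fails, but the same device mounts fine elsewhere
mount /dev/mapper/pve-data /mnt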

When I remove the mount point for /var/lib/vz, both servers boot fine without problems. Right now I don't have much additional info, as I need the cluster to be online later on.

Luckily I'll have one server without productive VMs tomorrow, where I can test the failing setup and give some more info.

Okay, back with some more info. Here's an excerpt from journalctl -xb that shows the problem:

Jul 24 10:47:28 proxmox3 systemd[1]: dev-pve-data.device: Job dev-pve-data.device/start timed out.
Jul 24 10:47:28 proxmox3 systemd[1]: Timed out waiting for device /dev/pve/data.
-- Subject: A start job for unit dev-pve-data.device has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit dev-pve-data.device has finished with a failure.
--
-- The job identifier is 22 and the job result is timeout.
Jul 24 10:47:28 proxmox3 systemd[1]: Dependency failed for /var/lib/vz.
-- Subject: A start job for unit var-lib-vz.mount has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit var-lib-vz.mount has finished with a failure.
--
-- The job identifier is 21 and the job result is dependency.
Jul 24 10:47:28 proxmox3 systemd[1]: Dependency failed for Local File Systems.
-- Subject: A start job for unit local-fs.target has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit local-fs.target has finished with a failure.
--
-- The job identifier is 20 and the job result is dependency.
Jul 24 10:47:28 proxmox3 systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
Jul 24 10:47:28 proxmox3 systemd[1]: local-fs.target: Triggering OnFailure= dependencies.
Jul 24 10:47:28 proxmox3 systemd[1]: var-lib-vz.mount: Job var-lib-vz.mount/start failed with result 'dependency'.
Jul 24 10:47:28 proxmox3 systemd[1]: dev-pve-data.device: Job dev-pve-data.device/start failed with result 'timeout'
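
The failing unit is dev-pve-data.device itself, i.e. systemd never considers the LV "plugged" before the job timeout (90 s by default) fires. Just as an illustration (these are standard systemd.mount options on the fstab side, not a fix I have verified), the timeout can be inspected and stretched per mount:

Code:
# what systemd knows about the device unit and its job timeout
systemctl status dev-pve-data.device
systemctl show dev-pve-data.device | grep -i timeout
# stop-gap fstab entry: wait longer for the LV and keep booting even if it
# still doesn't show up (nofail), instead of dropping to maintenance mode
/dev/mapper/pve-data /var/lib/vz ext4 errors=remount-ro,nofail,x-systemd.device-timeout=300s 0 0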


I'll attach the full log as well.

I found a similar error in the Debian bug tracker, though it's mentioned there that it was solved with udev 240-1; buster is at udev 241-5.



Regards

Marc
 
More info:

After starting the system in maintenance mode and mounting the failing LVM volume manually to /mnt, blkid doesn't show a UUID for the volume:

Mount:

sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=10247252k,nr_inodes=2561813,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=2054624k,mode=755)
/dev/mapper/pve-root on / type ext4 (rw,relatime,errors=remount-ro)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=34,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=15100)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
rpool on /rpool type zfs (rw,xattr,noacl)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
/dev/mapper/pve-data on /mnt type ext4 (rw,relatime,stripe=16)


BlkId:
/dev/sda2: UUID="21C3-B525" TYPE="vfat" PARTUUID="3f92f168-d4ed-488c-a29a-e47ab3bcca9c"
/dev/sda3: UUID="j633We-FHo5-8u3i-if40-Sr2v-8J26-cF8WXw" TYPE="LVM2_member" PARTUUID="8f0ac3af-b6dc-4628-a07f-b18af82ad3fc"
/dev/sdb1: LABEL="rpool" UUID="3403759353542098693" UUID_SUB="5155262077077103203" TYPE="zfs_member" PARTLABEL="zfs-86330f618fc0a3ef" PARTUUID="19050c59-95dd-4840-8be5-5cdaf47e1e4a"
/dev/mapper/pve-swap: UUID="ea12a440-df57-47ac-a04b-76f1698514cc" TYPE="swap"
/dev/mapper/pve-root: UUID="817de2ea-7122-4a60-9062-6b704246264f" TYPE="ext4"
/dev/sda1: PARTUUID="99099756-1c11-4f20-9162-3a7bd7141c4d"
/dev/sdb9: PARTUUID="c818af80-610e-8843-b33c-2a44dcb02ab9"
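
To figure out whether the missing UUID is a probing/caching issue or really absent from the superblock, something like this might help (a sketch; blkid -p probes the device directly and udevadm shows what udev has recorded for it):

Code:
# probe the mapped device directly, bypassing the blkid cache
blkid -p /dev/mapper/pve-data
# what udev stored for the node (ID_FS_UUID, ID_FS_TYPE, ...)
udevadm info /dev/mapper/pve-data
# re-run the block-device rules and wait for the event queue to drain
udevadm trigger --subsystem-match=block --action=change
udevadm settle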


I'm missing a path to analyze the problem further.

Regards

Marc
 
More Info:

I booted into the old Proxmox kernel 4.15.18-18-pve and it boots without a problem, so it seems the problem is related to the kernel from Proxmox v6.
I've attached the journal.

After some more tests, I can say that it's not the kernel, but the initrd.img.

I changed update_initramfs to all (in /etc/initramfs-tools/update-initramfs.conf) and recreated all initrds with update-initramfs -u. Now no kernel is able to boot correctly and all kernels end up in maintenance mode.
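
For reference, roughly what I ran, plus one way to inspect the result (the lsinitramfs check is just an illustration; adjust the kernel version as needed):

Code:
# update_initramfs=all in update-initramfs.conf makes the next run rebuild
# the images for every installed kernel
grep ^update_initramfs /etc/initramfs-tools/update-initramfs.conf
# rebuild the initrd images (equivalent to update-initramfs -u -k all)
update-initramfs -u
# check whether the lvm/udev pieces ended up inside the new image
lsinitramfs /boot/initrd.img-5.0.15-1-pve | grep -E 'lvm|udev' | head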


Regards

Marc
 

Attachments

  • 4.18.journal.log
    165.2 KB
What is new is that the problems got worse: the whole dist-upgrade led to at least 2 servers tilting after some time with ZFS pool errors. The VMs start to freeze and only a complete host reboot cycle brings them back online.

The only host that seems to do fine is the new one that has been installed from a fresh Proxmox v6 ISO.

So I got hit by two problems:

- LVM problems
- ZFS problems

That doesn't feel like a well-tested upgrade path.

The problem is that I can't do a proper debug session, as it's more important to keep the stack running. I will reinstall one of the servers that is having problems after the dist-upgrade to see if a clean install solves the problem.


I've attached a dmesg excerpt from one server having ZFS problems.

Regards

Marc
 

Attachments

  • Proxmox6.log
    125.6 KB
The LVM issues would require you to debug udev to see what goes wrong.
More Info:

I booted into the old Proxmox kernel 4.15.18-18-pve and it boots without a problem, so it seems the problem is related to the kernel from Proxmox v6.
I've attached the journal.

After some more tests, I can say that it's not the kernel, but the initrd.img.

I changed update_initramfs to all (in /etc/initramfs-tools/update-initramfs.conf) and recreated all initrds with update-initramfs -u. Now no kernel is able to boot correctly and all kernels end up in maintenance mode.

that log file does not show any errors AFAICT?

What is new is that the problems got worse: the whole dist-upgrade led to at least 2 servers tilting after some time with ZFS pool errors. The VMs start to freeze and only a complete host reboot cycle brings them back online.

and that one only shows that your zvol tasks are waiting (which is usually a symptom of an overloaded system, but could also be a bug in the ZFS module).

in any case, for both issues we'd need more logs/info and a clear description of your setup.
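
A rough way to capture udev debug output for a boot would be something like this (just a sketch, adjust to your bootloader and setup):

Code:
# raise the log level of the running udev daemon
udevadm control --log-priority=debug
# or, for the next boot, add debug options to the kernel command line
# (GRUB_CMDLINE_LINUX in /etc/default/grub, then run update-grub):
#   udev.log_priority=debug systemd.log_level=debug
# afterwards the messages show up in the journal of that boot
journalctl -b -u systemd-udevd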
 
I'm back with more problems :),

After running fine for 2 days, now the server with a fresh Proxmox v6 install has problems as well. This thread is about the same problem.

I had a stuck Linux VM, and even a triggered reboot on the host failed because it could not unmount the LVM volumes; only a cold reset brought the server back to life.

What I could find is:

Code:
Jul 31 07:04:18 proxmox7 kernel: [400437.580055] INFO: task zvol:1197 blocked for more than 120 seconds.
Jul 31 07:04:18 proxmox7 kernel: [400437.580088]       Tainted: P        W  O      5.0.15-1-pve #1
Jul 31 07:04:18 proxmox7 kernel: [400437.580110] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 31 07:04:18 proxmox7 kernel: [400437.580136] zvol            D    0  1197      2 0x80000000
Jul 31 07:04:18 proxmox7 kernel: [400437.580139] Call Trace:
Jul 31 07:04:18 proxmox7 kernel: [400437.580147]  __schedule+0x2d4/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.580150]  ? __schedule+0x2dc/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.580152]  schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580153]  rwsem_down_write_failed+0x160/0x340
Jul 31 07:04:18 proxmox7 kernel: [400437.580155]  ? schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580156]  ? rwsem_down_read_failed+0xe6/0x170
Jul 31 07:04:18 proxmox7 kernel: [400437.580157]  ? mutex_lock+0x12/0x30
Jul 31 07:04:18 proxmox7 kernel: [400437.580159]  call_rwsem_down_write_failed+0x17/0x30
Jul 31 07:04:18 proxmox7 kernel: [400437.580166]  ? spl_kmem_free+0x33/0x40 [spl]
Jul 31 07:04:18 proxmox7 kernel: [400437.580167]  down_write+0x2d/0x40
Jul 31 07:04:18 proxmox7 kernel: [400437.580210]  dmu_zfetch+0x134/0x590 [zfs]
Jul 31 07:04:18 proxmox7 kernel: [400437.580240]  dmu_buf_hold_array_by_dnode+0x379/0x450 [zfs]
Jul 31 07:04:18 proxmox7 kernel: [400437.580271]  dmu_write_uio_dnode+0x4c/0x140 [zfs]
Jul 31 07:04:18 proxmox7 kernel: [400437.580312]  zvol_write+0x190/0x620 [zfs]
Jul 31 07:04:18 proxmox7 kernel: [400437.580318]  taskq_thread+0x2ec/0x4d0 [spl]
Jul 31 07:04:18 proxmox7 kernel: [400437.580323]  ? wake_up_q+0x80/0x80
Jul 31 07:04:18 proxmox7 kernel: [400437.580327]  kthread+0x120/0x140
Jul 31 07:04:18 proxmox7 kernel: [400437.580332]  ? task_done+0xb0/0xb0 [spl]
Jul 31 07:04:18 proxmox7 kernel: [400437.580333]  ? __kthread_parkme+0x70/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580335]  ret_from_fork+0x22/0x40
Jul 31 07:04:18 proxmox7 kernel: [400437.580377] INFO: task txg_quiesce:2038 blocked for more than 120 seconds.
Jul 31 07:04:18 proxmox7 kernel: [400437.580402]       Tainted: P        W  O      5.0.15-1-pve #1
Jul 31 07:04:18 proxmox7 kernel: [400437.580422] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 31 07:04:18 proxmox7 kernel: [400437.580448] txg_quiesce     D    0  2038      2 0x80000000
Jul 31 07:04:18 proxmox7 kernel: [400437.580450] Call Trace:
Jul 31 07:04:18 proxmox7 kernel: [400437.580452]  __schedule+0x2d4/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.580454]  schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580458]  cv_wait_common+0x104/0x130 [spl]
Jul 31 07:04:18 proxmox7 kernel: [400437.580460]  ? wait_woken+0x80/0x80
Jul 31 07:04:18 proxmox7 kernel: [400437.580464]  __cv_wait+0x15/0x20 [spl]
Jul 31 07:04:18 proxmox7 kernel: [400437.580506]  txg_quiesce_thread+0x2ac/0x3a0 [zfs]
Jul 31 07:04:18 proxmox7 kernel: [400437.580548]  ? txg_sync_thread+0x4c0/0x4c0 [zfs]
Jul 31 07:04:18 proxmox7 kernel: [400437.580553]  thread_generic_wrapper+0x74/0x90 [spl]
Jul 31 07:04:18 proxmox7 kernel: [400437.580554]  kthread+0x120/0x140
Jul 31 07:04:18 proxmox7 kernel: [400437.580559]  ? __thread_exit+0x20/0x20 [spl]
Jul 31 07:04:18 proxmox7 kernel: [400437.580561]  ? __kthread_parkme+0x70/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580562]  ret_from_fork+0x22/0x40
Jul 31 07:04:18 proxmox7 kernel: [400437.580589] INFO: task kvm:10497 blocked for more than 120 seconds.
Jul 31 07:04:18 proxmox7 kernel: [400437.580611]       Tainted: P        W  O      5.0.15-1-pve #1
Jul 31 07:04:18 proxmox7 kernel: [400437.580632] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 31 07:04:18 proxmox7 kernel: [400437.580658] kvm             D    0 10497      1 0x00000000
Jul 31 07:04:18 proxmox7 kernel: [400437.580659] Call Trace:
Jul 31 07:04:18 proxmox7 kernel: [400437.580662]  __schedule+0x2d4/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.580663]  schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580665]  io_schedule+0x16/0x40
Jul 31 07:04:18 proxmox7 kernel: [400437.580668]  wait_on_page_bit_common+0x14f/0x350
Jul 31 07:04:18 proxmox7 kernel: [400437.580670]  ? file_check_and_advance_wb_err+0xe0/0xe0
Jul 31 07:04:18 proxmox7 kernel: [400437.580671]  __filemap_fdatawait_range+0x104/0x160
Jul 31 07:04:18 proxmox7 kernel: [400437.580674]  ? __filemap_fdatawrite_range+0xd1/0x100
Jul 31 07:04:18 proxmox7 kernel: [400437.580675]  file_write_and_wait_range+0x86/0xb0
Jul 31 07:04:18 proxmox7 kernel: [400437.580678]  blkdev_fsync+0x1b/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.580681]  vfs_fsync_range+0x48/0x80
Jul 31 07:04:18 proxmox7 kernel: [400437.580682]  ? __fget_light+0x54/0x60
Jul 31 07:04:18 proxmox7 kernel: [400437.580683]  do_fsync+0x3d/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580685]  __x64_sys_fdatasync+0x17/0x20
Jul 31 07:04:18 proxmox7 kernel: [400437.580688]  do_syscall_64+0x5a/0x110
Jul 31 07:04:18 proxmox7 kernel: [400437.580689]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jul 31 07:04:18 proxmox7 kernel: [400437.580691] RIP: 0033:0x7f0d222702e7
Jul 31 07:04:18 proxmox7 kernel: [400437.580696] Code: Bad RIP value.
Jul 31 07:04:18 proxmox7 kernel: [400437.580697] RSP: 002b:00007f0909278780 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
Jul 31 07:04:18 proxmox7 kernel: [400437.580699] RAX: ffffffffffffffda RBX: 000000000000001b RCX: 00007f0d222702e7
Jul 31 07:04:18 proxmox7 kernel: [400437.580700] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000000001b
Jul 31 07:04:18 proxmox7 kernel: [400437.580700] RBP: 00007f0d1546ca10 R08: 0000000000000000 R09: 00000000ffffffff
Jul 31 07:04:18 proxmox7 kernel: [400437.580701] R10: 00007f0909278760 R11: 0000000000000293 R12: 000055ebbedbcef2
Jul 31 07:04:18 proxmox7 kernel: [400437.580701] R13: 00007f0d1546ca78 R14: 00007f0d155ab7e0 R15: 00007f0d15527510
Jul 31 07:04:18 proxmox7 kernel: [400437.580704] INFO: task kvm:35231 blocked for more than 120 seconds.
Jul 31 07:04:18 proxmox7 kernel: [400437.580726]       Tainted: P        W  O      5.0.15-1-pve #1
Jul 31 07:04:18 proxmox7 kernel: [400437.580746] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 31 07:04:18 proxmox7 kernel: [400437.580772] kvm             D    0 35231      1 0x00000000
Jul 31 07:04:18 proxmox7 kernel: [400437.580773] Call Trace:
Jul 31 07:04:18 proxmox7 kernel: [400437.580775]  __schedule+0x2d4/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.580777]  schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580779]  io_schedule+0x16/0x40
Jul 31 07:04:18 proxmox7 kernel: [400437.580780]  wait_on_page_bit_common+0x14f/0x350
Jul 31 07:04:18 proxmox7 kernel: [400437.580782]  ? file_check_and_advance_wb_err+0xe0/0xe0
Jul 31 07:04:18 proxmox7 kernel: [400437.580784]  __filemap_fdatawait_range+0x104/0x160
Jul 31 07:04:18 proxmox7 kernel: [400437.580786]  file_write_and_wait_range+0x86/0xb0
Jul 31 07:04:18 proxmox7 kernel: [400437.580787]  blkdev_fsync+0x1b/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.580789]  vfs_fsync_range+0x48/0x80
Jul 31 07:04:18 proxmox7 kernel: [400437.580790]  ? __fget_light+0x54/0x60
Jul 31 07:04:18 proxmox7 kernel: [400437.580791]  do_fsync+0x3d/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580792]  __x64_sys_fdatasync+0x17/0x20
Jul 31 07:04:18 proxmox7 kernel: [400437.580793]  do_syscall_64+0x5a/0x110
Jul 31 07:04:18 proxmox7 kernel: [400437.580795]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jul 31 07:04:18 proxmox7 kernel: [400437.580796] RIP: 0033:0x7f0d222702e7
Jul 31 07:04:18 proxmox7 kernel: [400437.580797] Code: Bad RIP value.
Jul 31 07:04:18 proxmox7 kernel: [400437.580798] RSP: 002b:00007f090726d780 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
Jul 31 07:04:18 proxmox7 kernel: [400437.580799] RAX: ffffffffffffffda RBX: 000000000000000e RCX: 00007f0d222702e7
Jul 31 07:04:18 proxmox7 kernel: [400437.580800] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000000000e
Jul 31 07:04:18 proxmox7 kernel: [400437.580800] RBP: 00007f0d1546ca10 R08: 0000000000000000 R09: 00000000ffffffff
Jul 31 07:04:18 proxmox7 kernel: [400437.580801] R10: 00007f090726d760 R11: 0000000000000293 R12: 000055ebbedbcef2
Jul 31 07:04:18 proxmox7 kernel: [400437.580801] R13: 00007f0d1546ca78 R14: 00007f0d155aba80 R15: 00007f08f7602010
Jul 31 07:04:18 proxmox7 kernel: [400437.580814] INFO: task kvm:34900 blocked for more than 120 seconds.
Jul 31 07:04:18 proxmox7 kernel: [400437.580835]       Tainted: P        W  O      5.0.15-1-pve #1
Jul 31 07:04:18 proxmox7 kernel: [400437.580855] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 31 07:04:18 proxmox7 kernel: [400437.580881] kvm             D    0 34900      1 0x00000000
Jul 31 07:04:18 proxmox7 kernel: [400437.580882] Call Trace:
Jul 31 07:04:18 proxmox7 kernel: [400437.580884]  __schedule+0x2d4/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.580886]  schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580887]  io_schedule+0x16/0x40
Jul 31 07:04:18 proxmox7 kernel: [400437.580889]  wait_on_page_bit_common+0x14f/0x350
Jul 31 07:04:18 proxmox7 kernel: [400437.580890]  ? file_check_and_advance_wb_err+0xe0/0xe0
Jul 31 07:04:18 proxmox7 kernel: [400437.580892]  __filemap_fdatawait_range+0x104/0x160
Jul 31 07:04:18 proxmox7 kernel: [400437.580894]  file_write_and_wait_range+0x86/0xb0
Jul 31 07:04:18 proxmox7 kernel: [400437.580895]  blkdev_fsync+0x1b/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.580896]  vfs_fsync_range+0x48/0x80
Jul 31 07:04:18 proxmox7 kernel: [400437.580897]  ? __fget_light+0x54/0x60
Jul 31 07:04:18 proxmox7 kernel: [400437.580899]  do_fsync+0x3d/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580900]  __x64_sys_fdatasync+0x17/0x20
Jul 31 07:04:18 proxmox7 kernel: [400437.580901]  do_syscall_64+0x5a/0x110
Jul 31 07:04:18 proxmox7 kernel: [400437.580902]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jul 31 07:04:18 proxmox7 kernel: [400437.580903] RIP: 0033:0x7f0c371452e7
Jul 31 07:04:18 proxmox7 kernel: [400437.580905] Code: Bad RIP value.
Jul 31 07:04:18 proxmox7 kernel: [400437.580906] RSP: 002b:00007f0a148fa780 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
Jul 31 07:04:18 proxmox7 kernel: [400437.580907] RAX: ffffffffffffffda RBX: 0000000000000024 RCX: 00007f0c371452e7
Jul 31 07:04:18 proxmox7 kernel: [400437.580907] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000024
Jul 31 07:04:18 proxmox7 kernel: [400437.580908] RBP: 00007f0c2a46cbd0 R08: 0000000000000000 R09: 00000000ffffffff
Jul 31 07:04:18 proxmox7 kernel: [400437.580908] R10: 00007f0a148fa760 R11: 0000000000000293 R12: 000055b998ec3ef2
Jul 31 07:04:18 proxmox7 kernel: [400437.580909] R13: 00007f0c2a46cc38 R14: 00007f0c2a5abaf0 R15: 00007f0a09602010
Jul 31 07:04:18 proxmox7 kernel: [400437.580914] INFO: task kvm:15399 blocked for more than 120 seconds.
Jul 31 07:04:18 proxmox7 kernel: [400437.580936]       Tainted: P        W  O      5.0.15-1-pve #1
Jul 31 07:04:18 proxmox7 kernel: [400437.580956] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 31 07:04:18 proxmox7 kernel: [400437.580982] kvm             D    0 15399      1 0x00000000
Jul 31 07:04:18 proxmox7 kernel: [400437.580983] Call Trace:
Jul 31 07:04:18 proxmox7 kernel: [400437.580985]  __schedule+0x2d4/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.580987]  schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.580988]  io_schedule+0x16/0x40
Jul 31 07:04:18 proxmox7 kernel: [400437.580990]  wait_on_page_bit_common+0x14f/0x350
Jul 31 07:04:18 proxmox7 kernel: [400437.580991]  ? file_check_and_advance_wb_err+0xe0/0xe0
Jul 31 07:04:18 proxmox7 kernel: [400437.580993]  __filemap_fdatawait_range+0x104/0x160
Jul 31 07:04:18 proxmox7 kernel: [400437.580995]  file_write_and_wait_range+0x86/0xb0
Jul 31 07:04:18 proxmox7 kernel: [400437.580996]  blkdev_fsync+0x1b/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.580998]  vfs_fsync_range+0x48/0x80
Jul 31 07:04:18 proxmox7 kernel: [400437.580999]  ? __fget_light+0x54/0x60
Jul 31 07:04:18 proxmox7 kernel: [400437.581000]  do_fsync+0x3d/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.581001]  __x64_sys_fdatasync+0x17/0x20
Jul 31 07:04:18 proxmox7 kernel: [400437.581002]  do_syscall_64+0x5a/0x110
Jul 31 07:04:18 proxmox7 kernel: [400437.581004]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jul 31 07:04:18 proxmox7 kernel: [400437.581004] RIP: 0033:0x7f102ed402e7
Jul 31 07:04:18 proxmox7 kernel: [400437.581006] Code: Bad RIP value.
Jul 31 07:04:18 proxmox7 kernel: [400437.581007] RSP: 002b:00007f0e103fa780 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
Jul 31 07:04:18 proxmox7 kernel: [400437.581008] RAX: ffffffffffffffda RBX: 000000000000000e RCX: 00007f102ed402e7
Jul 31 07:04:18 proxmox7 kernel: [400437.581008] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000000000000000e
Jul 31 07:04:18 proxmox7 kernel: [400437.581009] RBP: 00007f102206ca10 R08: 0000000000000000 R09: 00000000ffffffff
Jul 31 07:04:18 proxmox7 kernel: [400437.581009] R10: 00007f0e103fa760 R11: 0000000000000293 R12: 000055de2e301ef2
Jul 31 07:04:18 proxmox7 kernel: [400437.581010] R13: 00007f102206ca78 R14: 00007f10221aba10 R15: 00007f0e0ae02010
Jul 31 07:04:18 proxmox7 kernel: [400437.581017] INFO: task kvm:35248 blocked for more than 120 seconds.
Jul 31 07:04:18 proxmox7 kernel: [400437.581039]       Tainted: P        W  O      5.0.15-1-pve #1
Jul 31 07:04:18 proxmox7 kernel: [400437.581059] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 31 07:04:18 proxmox7 kernel: [400437.581085] kvm             D    0 35248      1 0x00000000
Jul 31 07:04:18 proxmox7 kernel: [400437.581086] Call Trace:
Jul 31 07:04:18 proxmox7 kernel: [400437.581088]  __schedule+0x2d4/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.581090]  schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.581091]  schedule_timeout+0x258/0x360
Jul 31 07:04:18 proxmox7 kernel: [400437.581132]  ? zvol_request+0x30b/0x380 [zfs]
Jul 31 07:04:18 proxmox7 kernel: [400437.581134]  io_schedule_timeout+0x1e/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.581135]  wait_for_completion_io+0xb7/0x140
Jul 31 07:04:18 proxmox7 kernel: [400437.581137]  ? wake_up_q+0x80/0x80
Jul 31 07:04:18 proxmox7 kernel: [400437.581139]  submit_bio_wait+0x61/0x90
Jul 31 07:04:18 proxmox7 kernel: [400437.581143]  blkdev_issue_zeroout+0x142/0x220
Jul 31 07:04:18 proxmox7 kernel: [400437.581145]  blkdev_ioctl+0x5cd/0x9f0
Jul 31 07:04:18 proxmox7 kernel: [400437.581147]  block_ioctl+0x3d/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.581149]  do_vfs_ioctl+0xa9/0x640
Jul 31 07:04:18 proxmox7 kernel: [400437.581151]  ? handle_mm_fault+0xe1/0x210
Jul 31 07:04:18 proxmox7 kernel: [400437.581153]  ksys_ioctl+0x67/0x90
Jul 31 07:04:18 proxmox7 kernel: [400437.581155]  __x64_sys_ioctl+0x1a/0x20
Jul 31 07:04:18 proxmox7 kernel: [400437.581156]  do_syscall_64+0x5a/0x110
Jul 31 07:04:18 proxmox7 kernel: [400437.581158]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jul 31 07:04:18 proxmox7 kernel: [400437.581158] RIP: 0033:0x7fcde44ff427
Jul 31 07:04:18 proxmox7 kernel: [400437.581160] Code: Bad RIP value.
Jul 31 07:04:18 proxmox7 kernel: [400437.581161] RSP: 002b:00007fccc50ea728 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Jul 31 07:04:18 proxmox7 kernel: [400437.581162] RAX: ffffffffffffffda RBX: 00007fccca947860 RCX: 00007fcde44ff427
Jul 31 07:04:18 proxmox7 kernel: [400437.581162] RDX: 00007fccc50ea740 RSI: 000000000000127f RDI: 0000000000000018
Jul 31 07:04:18 proxmox7 kernel: [400437.581163] RBP: 0000000000002000 R08: 0000000000000000 R09: 00007ffc01d07080
Jul 31 07:04:18 proxmox7 kernel: [400437.581164] R10: 000000000af5a364 R11: 0000000000000246 R12: 00007fcdd7842760
Jul 31 07:04:18 proxmox7 kernel: [400437.581164] R13: 00007fccc50ea740 R14: 00007fcdd79ab5b0 R15: 00007fcdd7972490
Jul 31 07:04:18 proxmox7 kernel: [400437.581166] INFO: task kvm:35249 blocked for more than 120 seconds.
Jul 31 07:04:18 proxmox7 kernel: [400437.581188]       Tainted: P        W  O      5.0.15-1-pve #1
Jul 31 07:04:18 proxmox7 kernel: [400437.581208] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 31 07:04:18 proxmox7 kernel: [400437.581233] kvm             D    0 35249      1 0x00000000
Jul 31 07:04:18 proxmox7 kernel: [400437.581234] Call Trace:
Jul 31 07:04:18 proxmox7 kernel: [400437.581236]  __schedule+0x2d4/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.581238]  schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.581239]  schedule_timeout+0x258/0x360
Jul 31 07:04:18 proxmox7 kernel: [400437.581279]  ? zvol_request+0x30b/0x380 [zfs]
Jul 31 07:04:18 proxmox7 kernel: [400437.581281]  io_schedule_timeout+0x1e/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.581283]  wait_for_completion_io+0xb7/0x140
Jul 31 07:04:18 proxmox7 kernel: [400437.581284]  ? wake_up_q+0x80/0x80
Jul 31 07:04:18 proxmox7 kernel: [400437.581286]  submit_bio_wait+0x61/0x90
Jul 31 07:04:18 proxmox7 kernel: [400437.581287]  blkdev_issue_zeroout+0x142/0x220
Jul 31 07:04:18 proxmox7 kernel: [400437.581289]  blkdev_ioctl+0x5cd/0x9f0
Jul 31 07:04:18 proxmox7 kernel: [400437.581291]  block_ioctl+0x3d/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.581292]  do_vfs_ioctl+0xa9/0x640
Jul 31 07:04:18 proxmox7 kernel: [400437.581293]  ? handle_mm_fault+0xe1/0x210
Jul 31 07:04:18 proxmox7 kernel: [400437.581295]  ksys_ioctl+0x67/0x90
Jul 31 07:04:18 proxmox7 kernel: [400437.581296]  __x64_sys_ioctl+0x1a/0x20
Jul 31 07:04:18 proxmox7 kernel: [400437.581297]  do_syscall_64+0x5a/0x110
Jul 31 07:04:18 proxmox7 kernel: [400437.581299]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jul 31 07:04:18 proxmox7 kernel: [400437.581300] RIP: 0033:0x7fcde44ff427
Jul 31 07:04:18 proxmox7 kernel: [400437.581301] Code: Bad RIP value.
Jul 31 07:04:18 proxmox7 kernel: [400437.581302] RSP: 002b:00007fccc48e9728 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
Jul 31 07:04:18 proxmox7 kernel: [400437.581303] RAX: ffffffffffffffda RBX: 00007fccc9e3c860 RCX: 00007fcde44ff427
Jul 31 07:04:18 proxmox7 kernel: [400437.581303] RDX: 00007fccc48e9740 RSI: 000000000000127f RDI: 0000000000000018
Jul 31 07:04:18 proxmox7 kernel: [400437.581304] RBP: 0000000000002000 R08: 0000000000000000 R09: 00007ffc01d07080
Jul 31 07:04:18 proxmox7 kernel: [400437.581304] R10: 000000000af5a364 R11: 0000000000000246 R12: 00007fcdd7842760
Jul 31 07:04:18 proxmox7 kernel: [400437.581305] R13: 00007fccc48e9740 R14: 00007fcdd79ab540 R15: 00007fcdd7972c10
Jul 31 07:04:18 proxmox7 kernel: [400437.581317] INFO: task kvm:9399 blocked for more than 120 seconds.
Jul 31 07:04:18 proxmox7 kernel: [400437.581338]       Tainted: P        W  O      5.0.15-1-pve #1
Jul 31 07:04:18 proxmox7 kernel: [400437.581358] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 31 07:04:18 proxmox7 kernel: [400437.581383] kvm             D    0  9399      1 0x00000000
Jul 31 07:04:18 proxmox7 kernel: [400437.581384] Call Trace:
Jul 31 07:04:18 proxmox7 kernel: [400437.581387]  __schedule+0x2d4/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.581389]  schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.581390]  io_schedule+0x16/0x40
Jul 31 07:04:18 proxmox7 kernel: [400437.581391]  wait_on_page_bit_common+0x14f/0x350
Jul 31 07:04:18 proxmox7 kernel: [400437.581393]  ? file_check_and_advance_wb_err+0xe0/0xe0
Jul 31 07:04:18 proxmox7 kernel: [400437.581394]  __filemap_fdatawait_range+0x104/0x160
Jul 31 07:04:18 proxmox7 kernel: [400437.581397]  ? wbc_attach_and_unlock_inode+0x10a/0x130
Jul 31 07:04:18 proxmox7 kernel: [400437.581399]  ? __filemap_fdatawrite_range+0xd1/0x100
Jul 31 07:04:18 proxmox7 kernel: [400437.581400]  file_write_and_wait_range+0x86/0xb0
Jul 31 07:04:18 proxmox7 kernel: [400437.581402]  blkdev_fsync+0x1b/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.581403]  vfs_fsync_range+0x48/0x80
Jul 31 07:04:18 proxmox7 kernel: [400437.581404]  ? __fget_light+0x54/0x60
Jul 31 07:04:18 proxmox7 kernel: [400437.581405]  do_fsync+0x3d/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.581406]  __x64_sys_fdatasync+0x17/0x20
Jul 31 07:04:18 proxmox7 kernel: [400437.581408]  do_syscall_64+0x5a/0x110
Jul 31 07:04:18 proxmox7 kernel: [400437.581409]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jul 31 07:04:18 proxmox7 kernel: [400437.581410] RIP: 0033:0x7f35fa7d52e7
Jul 31 07:04:18 proxmox7 kernel: [400437.581411] Code: Bad RIP value.
Jul 31 07:04:18 proxmox7 kernel: [400437.581412] RSP: 002b:00007f33ce7fa780 EFLAGS: 00000293 ORIG_RAX: 000000000000004b
Jul 31 07:04:18 proxmox7 kernel: [400437.581413] RAX: ffffffffffffffda RBX: 0000000000000020 RCX: 00007f35fa7d52e7
Jul 31 07:04:18 proxmox7 kernel: [400437.581413] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000020
Jul 31 07:04:18 proxmox7 kernel: [400437.581414] RBP: 00007f35eda6cbd0 R08: 0000000000000000 R09: 00000000ffffffff
Jul 31 07:04:18 proxmox7 kernel: [400437.581414] R10: 00007f33ce7fa760 R11: 0000000000000293 R12: 0000559943645ef2
Jul 31 07:04:18 proxmox7 kernel: [400437.581415] R13: 00007f35eda6cc38 R14: 00007f35edbaba80 R15: 00007f33ccc02290
Jul 31 07:04:18 proxmox7 kernel: [400437.581417] INFO: task kvm:35247 blocked for more than 120 seconds.
Jul 31 07:04:18 proxmox7 kernel: [400437.581439]       Tainted: P        W  O      5.0.15-1-pve #1
Jul 31 07:04:18 proxmox7 kernel: [400437.581458] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Jul 31 07:04:18 proxmox7 kernel: [400437.581484] kvm             D    0 35247      1 0x00000000
Jul 31 07:04:18 proxmox7 kernel: [400437.581484] Call Trace:
Jul 31 07:04:18 proxmox7 kernel: [400437.581487]  __schedule+0x2d4/0x870
Jul 31 07:04:18 proxmox7 kernel: [400437.581489]  ? bit_wait_timeout+0xa0/0xa0
Jul 31 07:04:18 proxmox7 kernel: [400437.581490]  schedule+0x2c/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.581491]  io_schedule+0x16/0x40
Jul 31 07:04:18 proxmox7 kernel: [400437.581493]  bit_wait_io+0x11/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.581494]  __wait_on_bit+0x7b/0x90
Jul 31 07:04:18 proxmox7 kernel: [400437.581496]  out_of_line_wait_on_bit+0x90/0xb0
Jul 31 07:04:18 proxmox7 kernel: [400437.581498]  ? init_wait_var_entry+0x50/0x50
Jul 31 07:04:18 proxmox7 kernel: [400437.581500]  __block_write_begin_int+0x22c/0x5e0
Jul 31 07:04:18 proxmox7 kernel: [400437.581501]  ? check_disk_change+0x70/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.581502]  ? check_disk_change+0x70/0x70
Jul 31 07:04:18 proxmox7 kernel: [400437.581504]  block_write_begin+0x4d/0xf0
Jul 31 07:04:18 proxmox7 kernel: [400437.581505]  blkdev_write_begin+0x23/0x30
Jul 31 07:04:18 proxmox7 kernel: [400437.581506]  generic_perform_write+0xf2/0x1b0
Jul 31 07:04:18 proxmox7 kernel: [400437.581508]  __generic_file_write_iter+0x101/0x1f0
Jul 31 07:04:18 proxmox7 kernel: [400437.581510]  blkdev_write_iter+0xa0/0x120
Jul 31 07:04:18 proxmox7 kernel: [400437.581513]  ? clk_divider_bestdiv+0x130/0x440
Jul 31 07:04:18 proxmox7 kernel: [400437.581515]  do_iter_readv_writev+0x14c/0x1c0
Jul 31 07:04:18 proxmox7 kernel: [400437.581517]  do_iter_write+0x86/0x1a0
Jul 31 07:04:18 proxmox7 kernel: [400437.581518]  vfs_writev+0xa7/0x100
Jul 31 07:04:18 proxmox7 kernel: [400437.581520]  ? wake_up_q+0x80/0x80
Jul 31 07:04:18 proxmox7 kernel: [400437.581522]  ? _copy_from_user+0x3e/0x60
Jul 31 07:04:18 proxmox7 kernel: [400437.581524]  do_pwritev+0x8e/0xe0
Jul 31 07:04:18 proxmox7 kernel: [400437.581526]  __x64_sys_pwritev+0x21/0x30
Jul 31 07:04:18 proxmox7 kernel: [400437.581527]  do_syscall_64+0x5a/0x110
Jul 31 07:04:18 proxmox7 kernel: [400437.581528]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jul 31 07:04:18 proxmox7 kernel: [400437.581529] RIP: 0033:0x7f35fa7d469a
Jul 31 07:04:18 proxmox7 kernel: [400437.581530] Code: Bad RIP value.
Jul 31 07:04:18 proxmox7 kernel: [400437.581531] RSP: 002b:00007f33d2dfa720 EFLAGS: 00000246 ORIG_RAX: 0000000000000128
Jul 31 07:04:18 proxmox7 kernel: [400437.581532] RAX: ffffffffffffffda RBX: 0000000000000020 RCX: 00007f35fa7d469a
Jul 31 07:04:18 proxmox7 kernel: [400437.581532] RDX: 000000000000000a RSI: 00007f35edbc7138 RDI: 0000000000000020
Jul 31 07:04:18 proxmox7 kernel: [400437.581533] RBP: 00007f35edbc7138 R08: 0000000000000000 R09: 0000000000000000
Jul 31 07:04:18 proxmox7 kernel: [400437.581533] R10: 0000000ca8d7a000 R11: 0000000000000246 R12: 000000000000000a
Jul 31 07:04:18 proxmox7 kernel: [400437.581534] R13: 0000000ca8d7a000 R14: 00007f35edbabaf0 R15: 00007f33d0402010
Jul 31 07:05:00 proxmox7 systemd[1]: Starting Proxmox VE replication runner...
Jul 31 07:05:01 proxmox7 systemd[1]: pvesr.service: Succeeded.
Jul 31 07:05:01 proxmox7 systemd[1]: Started Proxmox VE replication runner.
Jul 31 07:06:00 proxmox7 systemd[1]: Starting Proxmox VE replication runner...
Jul 31 07:06:01 proxmox7 systemd[1]: pvesr.service: Succeeded.
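
All the blocked tasks above are either zvol worker threads or kvm processes stuck in fdatasync/ioctl against zvols. While a VM hangs like this, it might be worth checking whether the pool itself still makes progress (illustrative commands, not output from this incident):

Code:
# pool health and any reported read/write/checksum errors
zpool status -v
# live throughput and latency per vdev, refreshed every 2 seconds; if this
# sits at zero while VMs hang, the pool is stalled rather than merely busy
zpool iostat -v 2
# hung-task messages the kernel has already logged
dmesg | grep -i "blocked for more than"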

Code:
root@proxmox7:~# pveversion -v
proxmox-ve: 6.0-2 (running kernel: 5.0.15-1-pve)
pve-manager: 6.0-4 (running version: 6.0-4/2a719255)
pve-kernel-5.0: 6.0-5
pve-kernel-helper: 6.0-5
pve-kernel-5.0.15-1-pve: 5.0.15-1
ceph-fuse: 12.2.11+dfsg1-2.1
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve1
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-2
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-5
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-61
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-4
pve-container: 3.0-3
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-5
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-3
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-5
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1

Code:
root@proxmox7:~# systemctl status pvestatd
● pvestatd.service - PVE Status Daemon
   Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-07-31 09:15:49 CEST; 30min ago
  Process: 14176 ExecStart=/usr/bin/pvestatd start (code=exited, status=0/SUCCESS)
 Main PID: 14194 (pvestatd)
    Tasks: 1 (limit: 19660)
   Memory: 145.5M
   CGroup: /system.slice/pvestatd.service
           └─14194 pvestatd                                                                                                                                                   

Jul 31 09:15:49 proxmox7 systemd[1]: Starting PVE Status Daemon...
Jul 31 09:15:49 proxmox7 pvestatd[14194]: starting server
Jul 31 09:15:49 proxmox7 systemd[1]: Started PVE Status Daemon.
Jul 31 09:16:02 proxmox7 pvestatd[14194]: storage 'HP-NAS' is not online

Before rebooting the whole server, when trying to stop the VM I get:

Code:
Jul 31 09:02:06 proxmox7 qm[52460]: timeout waiting on systemd
Jul 31 09:02:06 proxmox7 qm[52459]: <root@pam> end task UPID:proxmox7:0000CCEC:026DCD89:5D413CE9:qmstart:725:root@pam: timeout waiting on systemd
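
"timeout waiting on systemd" seems to come from qm waiting on the VM's systemd scope, so that unit and the systemd job queue can be checked directly (a sketch; VM 725 taken from the task line above):

Code:
# state of the scope unit the qmstart task was waiting on
systemctl status 725.scope
# queued systemd jobs; a stuck start/stop job here would match the qm timeout
systemctl list-jobs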


My setup:
  • Dell R7415 AMD EPYC 7551P 32-Core Processor
  • 320GB RAM
  • 6x Intel P4610 as ZFS pool in a RAID10 setup
  • 2x 860 Pro SSD as root system


Right now the next VM is blocked, it's VM 703:

Code:
root@proxmox7:~# systemctl status qemu.slice
● qemu.slice
   Loaded: loaded
   Active: active since Wed 2019-07-31 09:15:54 CEST; 50min ago
    Tasks: 131
   Memory: 82.4G
   CGroup: /qemu.slice
           ├─701.scope
           │ └─14395 /usr/bin/kvm -id 701 -name service-authorit-w7x64 -chardev socket,id=qmp,path=/var/run/qemu-server/701.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,re
           ├─703.scope
           │ └─14772 /usr/bin/kvm -id 703 -name jira.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/703.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=
           ├─706.scope
           │ └─16384 /usr/bin/kvm -id 706 -name icets.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/706.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect
           ├─707.scope
           │ └─17869 /usr/bin/kvm -id 707 -name sdm-tomcat.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/707.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reco
           ├─708.scope
           │ └─23660 /usr/bin/kvm -id 708 -name build-ecco31amd64.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/708.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.so
           ├─709.scope
           │ └─32243 /usr/bin/kvm -id 709 -name sdmappsrv-audi.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/709.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,
           ├─712.scope
           │ └─35544 /usr/bin/kvm -id 712 -name sdmappsrv-daimler.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/712.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.so
           ├─714.scope
           │ └─41116 /usr/bin/kvm -id 714 -name sdmappsrv-pag.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/714.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,r
           ├─715.scope
           │ └─44091 /usr/bin/kvm -id 715 -name sdmappsrv-std.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/715.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,r
           ├─716.scope
           │ └─46225 /usr/bin/kvm -id 716 -name ci.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/716.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5
           ├─717.scope
           │ └─47628 /usr/bin/kvm -id 717 -name sdmappsrv-brose.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/717.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock
           ├─719.scope
           │ └─55414 /usr/bin/kvm -id 719 -name opnsense.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/719.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconn
           ├─720.scope
           │ └─62043 /usr/bin/kvm -id 720 -name zabbix.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/720.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnec
           └─725.scope
             └─14332 /usr/bin/kvm -id 725 -name dns-dhcp.pdtec.lan -chardev socket,id=qmp,path=/var/run/qemu-server/725.qmp,server,nowait -mon chardev=qmp,mode=control -chardev socket,id=qmp-event,path=/var/run/qmeventd.sock,reconn

Jul 31 09:15:54 proxmox7 systemd[1]: Created slice qemu.slice.
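
To see what the kvm process of VM 703 is actually blocked on, the kernel-side stack can be read from /proc (a sketch; 14772 is the PID shown for 703.scope above):

Code:
# kernel stack of the blocked kvm process (needs root)
cat /proc/14772/stack
# wait channel the process is currently sleeping in
cat /proc/14772/wchan; echo
# process state: D means uninterruptible sleep, matching the hung-task reports
ps -o pid,state,wchan:32,cmd -p 14772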
 
