OS not shutting down: 'a stop job is running for monitoring of lvm2 mirrors'

I have found a number of posts that seem to suggest I need 'use_lvmetad=0' in /etc/lvm/lvm.conf.
As far as I know, lvmetad has been deprecated, so that may not be the root of the problem.
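For what it's worth, here is how I checked whether the installed lvm2 still recognises that option; just a quick sketch, and on versions where lvmetad has been removed the grep simply returns nothing.

Code:
# look for any lvmetad-related settings in the active config
grep -n lvmetad /etc/lvm/lvm.conf

# check the installed LVM version (lvmetad was dropped in the 2.03 series)
lvm version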
Could you post the output of pveversion -v and provide some information on your system? Are you shutting down while VMs are still running?
 
The servers are HP DL360s, and the VMs had all been migrated off the node prior to the shutdown. The message is as per the post's title, and the node will wait at that point forever; I let it sit for 10 minutes before power cycling to bring it back up. Prior to putting in multipath iSCSI, I don't recall them doing this, but that was over a year ago.

Code:
root@agree-92:~# pveversion -v
proxmox-ve: 6.3-1 (running kernel: 5.4.98-1-pve)
pve-manager: 6.3-4 (running version: 6.3-4/0a38c56f)
pve-kernel-5.4: 6.3-5
pve-kernel-helper: 6.3-5
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.98-1-pve: 5.4.98-1
pve-kernel-5.4.78-2-pve: 5.4.78-2
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-4.10.17-2-pve: 4.10.17-20
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.1.0-pve1
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.0.7
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.3-4
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.1-1
libpve-storage-perl: 6.3-7
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.0.8-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.4-5
pve-cluster: 6.2-1
pve-container: 3.3-4
pve-docs: 6.3-1
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.2-2
pve-ha-manager: 3.1-1
pve-i18n: 2.2-2
pve-qemu-kvm: 5.2.0-1
pve-xtermjs: 4.7.0-3
qemu-server: 6.3-5
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.3-pve1
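Since multipath iSCSI is in the mix, here is a quick sanity check of the paths themselves for completeness; these are generic multipath-tools/open-iscsi commands rather than anything specific to this report, so treat the output as illustrative only.

Code:
# list the multipath maps and the state of each path
multipath -ll

# list the active iSCSI sessions with basic connection details
iscsiadm -m session -P 1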
 
Prior to putting in multipath iSCSI, I don't recall them doing this, but that was over a year ago.
Has the problem been present since setting up multipath iSCSI a year ago or is it still a relatively new problem?

Could you check the lvm2-monitor status (systemctl status lvm2-monitor) and the system logs (journalctl -u lvm2-monitor) for anything out of the ordinary?

And you are using LVM, right?
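If persistent journaling is enabled on the node, the logs from the boot where the shutdown hung are probably more interesting than the current one; roughly something like this (assuming journald is configured with Storage=persistent, otherwise -b -1 has nothing to show):

Code:
# lvm2-monitor messages from the previous boot
journalctl -b -1 -u lvm2-monitor

# everything at warning priority or above from the previous boot
journalctl -b -1 -p warning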
 
Rebooting nodes isn't something I do regularly, so I really don't know whether this is recent or not. It just became very apparent with my recent issues while trying to sort out the upgrade and HA (my other post on Node waiting for lock), as I rebooted all nodes multiple times.

Code:
root@agree-92:~# systemctl status lvm2-monitor
lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polli
Loaded: loaded (/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
Active: active (exited) since Wed 2021-02-24 10:09:47 NZDT; 1 day 21h ago
Docs: man:dmeventd(8)
man:lvcreate(8)
man:lvchange(8)
man:vgchange(8)
Process: 496 ExecStart=/sbin/lvm vgchange --monitor y (code=exited, status=0/SUCCESS)
Main PID: 496 (code=exited, status=0/SUCCESS)

Feb 24 10:09:47 agree-92 lvm[496]: 5 logical volume(s) in volume group "pve" monitored
Feb 24 10:09:47 agree-92 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeven
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

Yes, I am using LVM: all of my storage is on an array mounted via multipath iSCSI, and the individual VMs use LVM volumes. I am not using BtrFS as far as I am aware, and it would appear that lvm2-monitor has something to do with BtrFS snapshots. Maybe I can just 'systemctl disable lvm2-monitor'?

Code:
--- Logical volume ---
LV Path /dev/ADCSAAS/vm-145-disk-1
LV Name vm-145-disk-1
VG Name ADCSAAS
LV UUID jyPiDV-c2Ja-778e-RcmU-8vtX-T0bQ-3ApVLX
LV Write Access read/write
LV Creation host, time agree-92, 2020-11-21 07:59:03 +1300
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:11

Failing to shut down isn't too much of an issue, as I am typically watching the node's console via iLO, but it would be nice to get this sorted so I could reboot without having to watch.
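Looking at it a bit more, it seems monitoring can also be checked and switched off per volume group rather than disabling the whole unit; a rough sketch only, using my ADCSAAS VG from the lvdisplay output above as the example and the field name as listed by lvs -o help:

Code:
# show which LVs are currently monitored by dmeventd (Monitor column)
lvs -a -o +seg_monitor

# stop monitoring a single VG instead of disabling lvm2-monitor entirely
vgchange --monitor n ADCSAAS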
 
I am not using BtrFS as far as I am aware, and it would appear that lvm2-monitor has something to do with BtrFS snapshots. Maybe I can just 'systemctl disable lvm2-monitor'?
I have also seen this mentioned online, but I am unsure where that information comes from or why the two would be related. However, the man page for lvmthin states the following, which makes me hesitant about disabling it:
"The lvm daemon dmeventd (lvm2-monitor) monitors the data usage of thin pool LVs and extends them when the usage reaches a certain level. The necessary free space must exist in the VG to extend thin pool LVs. Monitoring and extension of thin pool LVs are controlled independently."

In terms of what's causing the issue, I'm going to need some more time to look into it.
 