Rebooting nodes isn't something I do regularly, so I really don't know whether this is a recent problem or not. It just became very apparent with my recent issues while trying to sort out the upgrade and HA (see my other post on "Node waiting for lock"), as I rebooted all the nodes multiple times.
root@agree-92:~# systemctl status lvm2-monitor
● lvm2-monitor.service - Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling
Loaded: loaded (/lib/systemd/system/lvm2-monitor.service; enabled; vendor preset: enabled)
Active: active (exited) since Wed 2021-02-24 10:09:47 NZDT; 1 day 21h ago
Docs: man:dmeventd(8)
man:lvcreate(8)
man:lvchange(8)
man:vgchange(8)
Process: 496 ExecStart=/sbin/lvm vgchange --monitor y (code=exited, status=0/SUCCESS)
Main PID: 496 (code=exited, status=0/SUCCESS)
Feb 24 10:09:47 agree-92 lvm[496]: 5 logical volume(s) in volume group "pve" monitored
Feb 24 10:09:47 agree-92 systemd[1]: Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
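(As an aside, systemctl truncates long lines and the journal had rotated, so the paste above is incomplete. The full lines, and logs from the previous boot, can be pulled with:

systemctl status lvm2-monitor --full --no-pager
journalctl -u lvm2-monitor -b -1 --no-pager

The -b -1 only works if the journal is persisted across reboots, which isn't the default everywhere.)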
Yes, I am using LVM: all of my storage is on an array mounted via multipath iSCSI, and the individual VMs use LVM volumes. I am not using BtrFS as far as I am aware; going by the unit description above, lvm2-monitor looks to be about monitoring LVM mirrors and snapshots via dmeventd rather than anything BtrFS-related. Maybe I can just 'systemctl disable lvm2-monitor'?
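If nothing on the node actually needs dmeventd (no LVM mirrors or old-style snapshots being watched), disabling it should be harmless, though that's an assumption on my part rather than something I've confirmed on Proxmox. A quick check, and the two obvious ways to turn it off:

# see which LVs dmeventd is currently monitoring
lvs -o vg_name,lv_name,seg_monitor

# either stop the unit from starting at boot...
systemctl disable lvm2-monitor

# ...or turn monitoring off globally in /etc/lvm/lvm.conf:
# activation {
#     monitoring = 0
# }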
--- Logical volume ---
LV Path /dev/ADCSAAS/vm-145-disk-1
LV Name vm-145-disk-1
VG Name ADCSAAS
LV UUID jyPiDV-c2Ja-778e-RcmU-8vtX-T0bQ-3ApVLX
LV Write Access read/write
LV Creation host, time agree-92, 2020-11-21 07:59:03 +1300
LV Status available
# open 1
LV Size 20.00 GiB
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:11
Failing to shut down isn't too much of an issue, as I'm typically watching the node's console via iLO, but it would be nice to get this sorted so I could reboot without having to watch.
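My working theory (just an assumption at this point) is the usual ordering problem with LVM on iSCSI: at shutdown the network and iSCSI session get torn down before the LVs on the multipath device are deactivated, so the node sits waiting on I/O that can never complete. One workaround I've seen suggested is a oneshot unit whose ExecStop deactivates the VG while the session is still up; systemd stops units in reverse order, so ordering it After= the iSCSI and multipath services makes its ExecStop run first. The unit name here is made up, ADCSAAS is just the VG from the lvdisplay above, and this assumes the VMs have already been stopped by that point in the shutdown:

# /etc/systemd/system/deactivate-iscsi-vg.service
[Unit]
Description=Deactivate LVM VG on iSCSI storage before the session is torn down
After=open-iscsi.service multipathd.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true
ExecStop=/sbin/vgchange -an ADCSAAS

[Install]
WantedBy=multi-user.target

Then 'systemctl daemon-reload && systemctl enable --now deactivate-iscsi-vg' and see whether the next reboot gets past it.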