> could you check your journal for messages from systemd about breaking cycles?

yes. What am I looking for specifically? I don't know what this means.
root@pve:~# journalctl -b --grep cycle
Jun 18 07:17:36 pve kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 18 07:17:36 pve kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3990bec8342, max_idle_ns: 881590769617 ns
Jun 18 07:17:36 pve kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 18 07:17:36 pve kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 18 07:17:36 pve kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3990bec8342, max_idle_ns: 881590769617 ns
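All of the matches above are kernel clocksource lines that merely contain the word "cycle"; systemd's own dependency-loop warnings normally contain the phrase "ordering cycle" ("Found ordering cycle on ...", "... deleted to break ordering cycle"). As a rough sketch, a narrower search would be:

# look only for systemd's ordering-cycle warnings, ignoring clocksource noise
journalctl -b --grep 'ordering cycle'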
> systemd-analyze critical-chain <UNIT>
root@pve:~# systemd-analyze critical-chain pve-guests.service
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
pve-guests.service +6min 17.731s
└─pve-ha-lrm.service @5.702s +403ms
  └─pveproxy.service @4.952s +741ms
    └─pvedaemon.service @4.329s +614ms
      └─pve-cluster.service @3.307s +1.010s
        └─rrdcached.service @3.281s +25ms
          └─time-sync.target @3.280s
            └─chrony.service @3.250s +29ms
              └─network.target @3.241s
                └─networking.service @2.229s +1.011s
                  └─local-fs.target @2.218s
                    └─etc-pve.mount @3.317s
                      └─local-fs-pre.target @169ms
                        └─lvm2-monitor.service @136ms +32ms
                          └─systemd-journald.socket @133ms
                            └─system.slice @117ms
                              └─-.slice @117ms
root@pve:~# systemd-analyze critical-chain pveproxy.service
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
pveproxy.service +741ms
└─pvedaemon.service @4.329s +614ms
  └─pve-cluster.service @3.307s +1.010s
    └─rrdcached.service @3.281s +25ms
      └─time-sync.target @3.280s
        └─chrony.service @3.250s +29ms
          └─network.target @3.241s
            └─networking.service @2.229s +1.011s
              └─local-fs.target @2.218s
                └─etc-pve.mount @3.317s
                  └─local-fs-pre.target @169ms
                    └─lvm2-monitor.service @136ms +32ms
                      └─systemd-journald.socket @133ms
                        └─system.slice @117ms
                          └─-.slice @117ms
root@pve:~# systemd-analyze critical-chain pve-cluster.service
The time when unit became active or started is printed after the "@" character.
The time the unit took to start is printed after the "+" character.
pve-cluster.service +1.010s
└─rrdcached.service @3.281s +25ms
  └─time-sync.target @3.280s
    └─chrony.service @3.250s +29ms
      └─network.target @3.241s
        └─networking.service @2.229s +1.011s
          └─local-fs.target @2.218s
            └─etc-pve.mount @3.317s
              └─local-fs-pre.target @169ms
                └─lvm2-monitor.service @136ms +32ms
                  └─systemd-journald.socket @133ms
                    └─system.slice @117ms
                      └─-.slice @117ms
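Reading the chains above: every unit below pve-guests.service finishes within roughly six seconds of boot, while pve-guests.service itself takes +6min 17.731s, so the delay sits inside pve-guests.service rather than in its dependencies. As a sketch of the usual next step (not output from this system), the per-unit journal and the blame list would show where that time goes:

# what pve-guests.service logged while it was starting
journalctl -b -u pve-guests.service

# all units sorted by time spent initializing, to cross-check the critical chain
systemd-analyze blame | head -n 20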
> I think I found the cause

Very odd indeed that a seemingly "dormant" VM template, pointing to an unconfigured storage via its ISO CD alone, should cause this behavior.

Maybe as a test, make a completely new VM template with an ISO CD stored on a storage that you later remove.
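A rough command-line sketch of that test (the storage name, VMID and ISO file below are made-up placeholders, and the ISO is assumed to already sit under the storage's template/iso/ directory):

# add a temporary directory storage for ISO images
pvesm add dir scratch-iso --path /mnt/scratch-iso --content iso
# create a VM whose CD drive points at an ISO on that storage, then convert it into a template
qm create 9001 --name repro-template --memory 1024 --ide2 scratch-iso:iso/placeholder.iso,media=cdrom
qm template 9001
# remove the storage definition so the template's CD now references an unconfigured storage
pvesm remove scratch-iso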
> I believe I made all of the VMs as full-clones.

The template is not, but the VMs cloned from it are. I haven't had much time to drill down into an MRE (minimal reproducible example), but I also have not been able to reproduce the issue when creating a brand-new template as gfngfn256 suggested. I can still reproduce the problem with the existing template. It's interesting, though: since I re-installed Proxmox, none of the VMs or templates were carried over from the previous install, so something happens during configuration that makes this reproducible. I should have time in the next couple of days to work on this.

Is the VM with the dangling ISO reference marked as start on boot as well? I can't reproduce this in any case.
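For the start-on-boot question, the flag shows up in the guest configuration; something along these lines (the VMID is a placeholder) lists it:

# a single guest: "onboot: 1" means start-at-boot is enabled
qm config 100 | grep onboot
# or across all VM configs on this node
grep -H onboot /etc/pve/qemu-server/*.conf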
> anyhow, could you provide the following for a "good" and a "bad" boot:

I can definitely get these to you as well.