LXC Containers do not start after latest Kernel Upgrade to 7.0.2-2

Hi,
Please share the system journal from the problematic boot. What output do you get when you run the following commands?
Code:
systemd-tmpfiles --dry-run --create 2>&1 | grep pve
systemd-tmpfiles --cat-config | grep pve
pveversion -v
 
Here everything worked fine updating from 7.0.0-3 to 7.0.2-2.
No-subscription repo, single-node PVE with 11 LXC containers.

I don't know if this will help, but maybe it's useful for reference:
Code:
root@pve-i5:~# systemd-tmpfiles --cat-config | grep pve
# /usr/lib/tmpfiles.d/pve-manager.conf
d     /run/pve 0750 root www-data  -   -

root@pve-i5:~# systemd-tmpfiles --dry-run --create 2>&1 | grep pve
Would create directory /run/pve

root@pve-i5:~# pveversion -v
proxmox-ve: 9.1.0 (running kernel: 7.0.2-2-pve)
pve-manager: 9.1.9 (running version: 9.1.9/ee7bad0a3d1546c9)
proxmox-kernel-helper: 9.0.4
proxmox-kernel-7.0: 7.0.2-2
proxmox-kernel-7.0.2-2-pve-signed: 7.0.2-2
proxmox-kernel-7.0.0-3-pve-signed: 7.0.0-3
amd64-microcode: 3.20251202.1~bpo13+1
ceph-fuse: 19.2.3-pve4
corosync: 3.1.10-pve2
criu: 4.1.1-1
frr-pythontools: 10.4.1-1+pve1
ifupdown2: 3.3.0-1+pmx12
intel-microcode: 3.20251111.1~deb13u1
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-5
libproxmox-acme-perl: 1.7.1
libproxmox-backup-qemu0: 2.0.2
libproxmox-rs-perl: 0.4.1
libpve-access-control: 9.0.7
libpve-apiclient-perl: 3.4.2
libpve-cluster-api-perl: 9.1.2
libpve-cluster-perl: 9.1.2
libpve-common-perl: 9.1.11
libpve-guest-common-perl: 6.0.2
libpve-http-server-perl: 6.0.5
libpve-network-perl: 1.3.0
libpve-notify-perl: 9.1.2
libpve-rs-perl: 0.13.0
libpve-storage-perl: 9.1.2
libspice-server1: 0.15.2-1+b1
lvm2: 2.03.31-2+pmx1
lxc-pve: 6.0.5-4
lxcfs: 6.0.4-pve1
novnc-pve: 1.6.0-4
proxmox-backup-client: 4.2.0-1
proxmox-backup-file-restore: 4.2.0-1
proxmox-backup-restore-image: 1.0.0
proxmox-firewall: 1.2.2
proxmox-kernel-helper: 9.0.4
proxmox-mail-forward: 1.0.3
proxmox-mini-journalreader: 1.6
proxmox-widget-toolkit: 5.1.9
pve-cluster: 9.1.2
pve-container: 6.1.5
pve-docs: 9.1.2
pve-edk2-firmware: 4.2025.05-2
pve-esxi-import-tools: 1.0.1
pve-firewall: 6.0.4
pve-firmware: 3.18-3
pve-ha-manager: 5.2.0
pve-i18n: 3.7.1
pve-qemu-kvm: 10.1.2-7
pve-xtermjs: 5.5.0-3
qemu-server: 9.1.9
smartmontools: 7.4-pve1
spiceterm: 3.4.2
swtpm: 0.8.0+pve3
vncterm: 1.9.2
zfsutils-linux: 2.4.1-pve1
 
It seems there is a race condition between systemd-tmpfiles and several ZFS-related services. We are using ZFS on LUKS and a systemd service to unlock the LUKS partitions. Curiously, this didn't happen before. I'll try to fix this.
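
If it really is an ordering problem, one possible workaround is a systemd drop-in that makes the ZFS pool import wait for the LUKS unlock. A minimal sketch, assuming the unlock unit is called unlock-luks.service (a placeholder name; the actual unit name will differ on your system):
Code:
# /etc/systemd/system/zfs-import-cache.service.d/after-luks.conf
# Order the ZFS pool import after the (hypothetical) LUKS unlock unit,
# so the pool devices exist before the import runs.
[Unit]
After=unlock-luks.service
Requires=unlock-luks.service
 
Then run systemctl daemon-reload and reboot to check whether the containers come up reliably again.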