I'm running Proxmox VE 5.0-10/0d270679 BETA. This morning I saw the above error message on the system console, and it also shows up in `journalctl -xb`. It's a kernel error message.
Googling, this seems a bit strange: the message appears mostly in relation to systems dual-booting with Windows Logical Disk Manager (LDM) volumes. However, this is a straight Proxmox install from the Proxmox ISO (not an install over Debian), and there has never been Windows installed on any attached drive, let alone a dual-boot setup. I boot from a ZFS mirror of two SSDs, and I have four WD REDs plus an Intel 320 SLOG in a ZFS pool for most data, configured as ZFS storage in Proxmox.
I have recently restored a Windows *guest* with drives managed by Windows LDM from backup to the main ZFS pool, and extended one of the virtual disks via Proxmox and then extended the volume into the free space on Linux. This has obviously created/amended ZFS volumes which I guess may be visible to the Linux kernel as block devices. Is it possible the LDM driver could be trying to do something with these block devices (possibly even trying to mount them?), and if so do I need to worry about possible corruption of the guest etc.?
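To check whether the host kernel really is scanning the guest's zvols and tripping the LDM partition parser, something like the following might help (pool/dataset paths here are assumptions; adjust to your own layout):

```shell
# Zvols backing VM disks are exposed to the host as block devices
# via symlinks under /dev/zvol/<pool>/<dataset>
ls -l /dev/zvol/

# Show the zvol block devices and any partitions the host kernel
# has detected on them (zvols appear as /dev/zdN)
lsblk -o NAME,TYPE,SIZE /dev/zd* 2>/dev/null

# Look for LDM partition-parser messages in the kernel log
dmesg | grep -i ldm

# Confirm the LDM partition parser is built into the running kernel
grep CONFIG_LDM_PARTITION "/boot/config-$(uname -r)"
```

If the LDM messages in `dmesg` line up with the `/dev/zd*` devices backing the restored Windows guest, that would suggest the host's partition scan of the zvol, not any attempt to mount it, is what triggers the error.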
It seems a little odd even to have LDM functionality compiled into the Proxmox kernel; presumably dual-booting Proxmox with Windows is a very niche use case, and even then it's hard to see why users would need to access LDM drives from the host.
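If the host's partition scan of the zvols is indeed the trigger, I gather the ZFS `volmode` property can suppress it: `volmode=dev` exposes the zvol as a raw device without the host parsing its partition table (see zfsprops(7)). A sketch, using a hypothetical dataset name; this is a configuration change, not something I've verified fixes this particular message:

```shell
# Check the current volmode of the guest's disk zvol
# (dataset name below is an example; substitute your own)
zfs get volmode rpool/data/vm-100-disk-0

# 'dev' stops the host from scanning/exposing partitions on the zvol;
# the guest still sees the full disk unchanged
zfs set volmode=dev rpool/data/vm-100-disk-0
```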