Hello everyone,
this week I've repeatedly encountered the following behavior on LINSTOR / ZFS storage while testing a new ZFS pool:
1) When deleting a VM disk (while destroying a VM) backed by LINSTOR / ZFS, the operation fails with the following error message:
Code:
Could not remove disk 'linstor_storage_yyy:pm-XXX_nnnn', check manually: API Return-Code: 500. Message: Could not delete resource pm-XXX, because: ...
... and the reasoning ends with a ZFS-layer message:
Code:
cannot destroy 'zpool_15k_mirror/pm-XXXX_00000': dataset is busy
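By the time I can check by hand the zvol is already gone, but if anyone catches the race live, checks along these lines should show what still holds the device open (resource / dataset names are the placeholders used above):
Code:
# any process still holding the zvol device node open?
fuser -v /dev/zvol/zpool_15k_mirror/pm-XXXX_00000
# is DRBD still attached to its backing disk at that moment?
drbdsetup status pm-XXXX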
2) However, after manual verification:
- Proxmox VM (or disk) has been successfully removed despite the error
- LINSTOR resource pm-XXX no longer exists (it has been removed successfully from all nodes)
- ZFS zvol pool-yyy/pm-XXX_nnnn is also gone on all storage nodes
So everything is ultimately removed, suggesting some kind of race condition rather than a real failure; the checks I ran are shown below.
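For reference, this is a minimal way to re-check both layers (resource / dataset names are the placeholders from above):
Code:
# resource should no longer be known to LINSTOR
linstor resource list --resource pm-XXX
# zvol should be gone on every storage node
zfs list -t volume | grep pm-XXX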
Environment:
Proxmox VE: 9.1.0
Kernel: 6.17.4-2-pve
linstor-proxmox: 8.2.0-1
linstor-controller/satellite: 1.33.1
Sample error message:
Code:
Could not remove disk 'linstor_storage_ZFS_15k:pm-dde91abc_5004', check manually: API Return-Code: 500. Message: Could not delete resource pm-dde91abc, because:
[{"ret_code":54001666,"message":"Resource definition 'pm-dde91abc' marked for deletion.","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.153694539+01:00"},{"ret_code":53739523,"message":"(pve1) Resource 'pm-dde91abc' [DRBD] adjusted.","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.282228854+01:00"},{"ret_code":54001667,"message":"Resource 'pm-dde91abc' on 'pve1' marked for deletion","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.282330268+01:00"},{"ret_code":53739522,"message":"(pve3) Resource 'pm-dde91abc' [DRBD] deleted.","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.332757089+01:00"},{"ret_code":54001667,"message":"Resource 'pm-dde91abc' on 'pve3' marked for deletion","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.332840311+01:00"},{"ret_code":53739523,"message":"(pve2) Resource 'pm-dde91abc' [DRBD] adjusted.","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.42067329+01:00"},{"ret_code":54001667,"message":"Resource 'pm-dde91abc' on 'pve2' marked for deletion","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.420732735+01:00"},{"ret_code":53739523,"message":"(pve1) Resource 'pm-dde91abc' [DRBD] adjusted.","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.492183168+01:00"},{"ret_code":54001667,"message":"Resource 'pm-dde91abc' on 'pve1' deleted","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.492298966+01:00"},{"ret_code":53739523,"message":"(pve2) Resource 'pm-dde91abc' [DRBD] adjusted.","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.619810865+01:00"},{"ret_code":54001667,"message":"Resource 'pm-dde91abc' on 'pve2' deleted","obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:41.619885904+01:00"},{"ret_code":-4611686018373385242,"message":"(pve1) Failed to delete zfs volume","details":"Command 'zfs destroy zpool_15k_mirror/pm-dde91abc_00000' returned with exitcode 1. \n\nStandard out: \n\n\nError message: \ncannot destroy 'zpool_15k_mirror/pm-dde91abc_00000': dataset is busy\n\n","error_report_ids":["69BAD5B9-8A94D-000002"],"obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:42.064215285+01:00"},{"ret_code":-4611686018373385242,"message":"(pve2) Failed to delete zfs volume","details":"Command 'zfs destroy zpool_15k_mirror/pm-dde91abc_00000' returned with exitcode 1. \n\nStandard out: \n\n\nError message: \ncannot destroy 'zpool_15k_mirror/pm-dde91abc_00000': dataset is busy\n\n","error_report_ids":["69BB9A77-F21CA-000002"],"obj_refs":{"RscDfn":"pm-dde91abc"},"created_at":"2026-03-19T15:34:42.285309706+01:00"}]
at /usr/share/perl5/PVE/Storage/Custom/LINSTORPlugin.pm line 498.
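The error_report_ids embedded in the response can be pulled up on the controller for the full stack trace, e.g.:
Code:
linstor error-reports show 69BAD5B9-8A94D-000002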
Test pool configuration:
Code:
# zpool status zpool_15k_mirror -P
  pool: zpool_15k_mirror
 state: ONLINE
config:

        NAME                                                          STATE     READ WRITE CKSUM
        zpool_15k_mirror                                              ONLINE       0     0     0
          mirror-0                                                    ONLINE       0     0     0
            /dev/disk/by-id/scsi-35000c500****4e1f-part1              ONLINE       0     0     0
            /dev/disk/by-id/scsi-35000c500****e46f-part1              ONLINE       0     0     0
          mirror-1                                                    ONLINE       0     0     0
            /dev/disk/by-id/scsi-35000c500****e6e3-part1              ONLINE       0     0     0
            /dev/disk/by-id/scsi-35000c500****47eb-part1              ONLINE       0     0     0
        logs
          mirror-2                                                    ONLINE       0     0     0
            /dev/disk/by-id/nvme-eui.000000000000000100a****3f51-part2  ONLINE     0     0     0
            /dev/disk/by-id/nvme-eui.000000000000000100a****4140-part2  ONLINE     0     0     0
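For completeness, the mapping of this zpool into LINSTOR can be listed on the controller (output omitted here):
Code:
linstor storage-pool list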
LINSTOR topology:
- 2 storage nodes with a direct 25 Gbps DRBD replication link,
- a third diskless node as quorum witness / tie-breaker.
Code:
root@pve1:~# linstor resource list --resource pm-2b4449d1
╭────────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node ┊ Layers ┊ Usage ┊ Conns ┊ State ┊ CreatedOn ┊
╞════════════════════════════════════════════════════════════════════════════════════════╡
┊ pm-2b4449d1 ┊ pve1 ┊ DRBD,STORAGE ┊ Unused ┊ Ok ┊ UpToDate ┊ 2026-03-18 19:01:25 ┊
┊ pm-2b4449d1 ┊ pve2 ┊ DRBD,STORAGE ┊ Unused ┊ Ok ┊ UpToDate ┊ 2026-03-18 19:01:28 ┊
┊ pm-2b4449d1 ┊ pve3 ┊ DRBD,STORAGE ┊ Unused ┊ Ok ┊ TieBreaker ┊ 2026-03-18 19:01:26 ┊
╰────────────────────────────────────────────────────────────────────────────────────────╯