ZFS kvm device cannot be destroyed after migrating to other node

dirk.nilius

Member since Nov 5, 2015 — Berlin, Germany
Hi,

after migrating a KVM machine to another node, the migration fails with a ZFS error. I am not able to destroy the image anymore, but I was able to rename it.

zfs destroy -f rpool/test
cannot destroy 'rpool/test': dataset is busy

Same behavior after reboot. There is nothing that could still have a handle on this device; the VM config is gone. Looks like: https://github.com/zfsonlinux/zfs/issues/3735
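One way to double-check the "nothing has a handle" assumption: the kernel exposes each block device's holders in sysfs, and a device-mapper entry there (e.g. created by LVM from inside the guest image) is exactly what keeps a zvol busy. A minimal sketch — the device name zd0 and the holder name are illustrative, not from this thread:

```shell
# List what the kernel thinks still holds a zvol's block device open.
# A non-empty result (e.g. "dm-0" created by LVM) would explain
# "cannot destroy ...: dataset is busy".
# $1 = block device name (e.g. zd0), $2 = sysfs root (default /sys).
holders() {
  local dev="$1" sysroot="${2:-/sys}"
  ls "$sysroot/class/block/$dev/holders" 2>/dev/null
}

# Example (zd0 is an assumed device name):
#   holders zd0
```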

zfs get all rpool/test
NAME        PROPERTY              VALUE                  SOURCE
rpool/test  type                  volume                 -
rpool/test  creation              Thu Oct 29 15:18 2015  -
rpool/test  used                  103G                   -
rpool/test  available             784G                   -
rpool/test  referenced            21.2G                  -
rpool/test  compressratio         1.16x                  -
rpool/test  reservation           none                   default
rpool/test  volsize               100G                   local
rpool/test  volblocksize          8K                     -
rpool/test  checksum              on                     default
rpool/test  compression           lz4                    inherited from rpool
rpool/test  readonly              off                    local
rpool/test  copies                1                      default
rpool/test  refreservation        103G                   local
rpool/test  primarycache          all                    default
rpool/test  secondarycache        all                    default
rpool/test  usedbysnapshots       0                      -
rpool/test  usedbydataset         21.2G                  -
rpool/test  usedbychildren        0                      -
rpool/test  usedbyrefreservation  82.0G                  -
rpool/test  logbias               latency                default
rpool/test  dedup                 off                    default
rpool/test  mlslabel              none                   default
rpool/test  sync                  standard               inherited from rpool
rpool/test  refcompressratio      1.16x                  -
rpool/test  written               21.2G                  -
rpool/test  logicalused           24.4G                  -
rpool/test  logicalreferenced     24.4G                  -
rpool/test  snapshot_limit        none                   default
rpool/test  snapshot_count        none                   default
rpool/test  snapdev               hidden                 default
rpool/test  context               none                   default
rpool/test  fscontext             none                   default
rpool/test  defcontext            none                   default
rpool/test  rootcontext           none                   default
rpool/test  redundant_metadata    all                    default
rpool/test  shareiscsi            off                    default


Anyone else with the same issue, or a solution?
 
Hi, is there an LVM on this zvol?
If yes, please disable the LVM scan for /dev/zd* devices in /etc/lvm/lvm.conf:
filter = [ "r|/dev/zd*|" ]
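For what it's worth, the filter string is a regular expression, so "/dev/zd*" literally means "/dev/z" followed by zero or more "d"; since LVM matches it unanchored, it still rejects every /dev/zdN zvol node. A quick sanity check of which paths the rule hits, using grep -E as a stand-in for LVM's matcher:

```shell
# Emulate the r|/dev/zd*| reject rule with grep -E to see which
# device paths it would hit. grep -E stands in for LVM's own matcher.
lvm_rejects() {
  printf '%s\n' "$1" | grep -qE '/dev/zd*'
}

lvm_rejects /dev/zd0 && echo "rejected: /dev/zd0"   # zvol: filtered out
lvm_rejects /dev/sda || echo "accepted: /dev/sda"   # real disk: still scanned
```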
 
Thx Wolfgang, that solves the problem. The LVM package comes from the Proxmox repo, right? So you should consider adding this to the default config, as LVM inside Linux guests is a very common pattern.
 
I am having the same error message with a disk move with "delete source". filter = [ "r|/dev/zd*|" ] was already set. When I execute zfs destroy after a minute or so, it works. Seems to be a timing issue.
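If it really is a timing issue (udev/LVM releasing the device a moment after the move finishes), a small retry wrapper saves re-running the destroy by hand. A sketch — the retry counts and the zfs destroy target below are just examples based on this thread:

```shell
# Retry a command a few times with a pause, for the case where the
# zvol only becomes free a moment after the disk move finishes.
# RETRY_DELAY (seconds between attempts) defaults to 5.
retry() {
  local tries="$1"; shift
  local delay="${RETRY_DELAY:-5}" i
  for i in $(seq 1 "$tries"); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}

# Usage (dataset name from this thread):
#   retry 6 zfs destroy rpool/test
```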
 
