[SOLVED] "One of the devices is part of an active md or lvm device" error on ZFS pool creation (dm-multipath)

Whatever

Renowned Member
Nov 19, 2012
I'm facing an issue creating a ZFS pool on dm-multipath devices (clean PVE 6.3 install).

I have an HP Gen8 server with a dual-port HBA connected via two SAS cables to an HP D3700 enclosure holding dual-port SAS SSDs (Samsung 1649a).

I've installed multipath-tools and changed multipath.conf accordingly.
Code:
root@254-pve-03345:/dev/mapper# cat /etc/multipath.conf
defaults {
        find_multipaths no
        polling_interval 5
        path_selector "round-robin 0"
        path_grouping_policy multibus
        rr_min_io 100
        failback immediate
        no_path_retry 0
        user_friendly_names no
        uid_attribute "ID_SERIAL"
}

devices {
        device {
                vendor "SAMSUNG"
                product "MZILT3T8HBLS/007"
                uid_attribute "ID_WWN"
                path_grouping_policy group_by_serial 
                prio const
                path_checker directio
        }
}


blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|nvme)[0-9]*"
        devnode "^(td|hd)[a-z]"
        devnode "^dcssblk[0-9]*"
        devnode "^cciss!c[0-9]d[0-9]*"
        wwid .*
}
blacklist_exceptions {
        wwid "5002538b*"
}
multipaths {
        multipath {
                wwid "5002538b*"

        }
}
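
In case it helps anyone with a similar setup: the WWIDs matched by blacklist_exceptions can be cross-checked against what udev and multipath actually report. A rough sketch (sdai is one of the SAS path devices from the output further down):
Code:
# WWID that multipath will use for one of the path devices:
/lib/udev/scsi_id --whitelisted --device=/dev/sdai

# Effective (merged) multipath configuration, including the blacklist:
multipathd show config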

The multipath daemon started and created the dm devices and mapper names correctly:
Code:
root@254-pve-03345:/dev/mapper# multipath -ll
0x5002538b111bfc20 dm-27 SAMSUNG,MZILT3T8HBLS/007
size=3.5T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 2:0:38:0 sdao 66:128 active ready running
  `- 2:0:39:0 sdap 66:144 active ready running
0x5002538b111bfbc0 dm-18 SAMSUNG,MZILT3T8HBLS/007
size=3.5T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 2:0:32:0 sdai 66:32  active ready running
  `- 2:0:33:0 sdaj 66:48  active ready running

I've also changed lvm.conf to filter those devices out of the LVM scan.
Code:
...
        global_filter = [ "r|/dev/dm-.*|", "r|/dev/sd.*|", "r|/dev/zd.*|", "r|/dev/mapper/.*|", "r|/dev/mapper/.*-(vm|base)--[0-9]+--disk--[0-9]+|"]
        filter = [ "r/block/", "r/disk/", "r/sd./" ]
...
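
To double-check that the filter is actually picked up, something like this should do (a sketch, nothing PVE-specific assumed):
Code:
# Show the global_filter LVM is really using:
lvmconfig devices/global_filter

# Re-scan and confirm no PVs are reported on the multipath devices:
pvscan --cache
pvs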


However, when I try to create a ZFS pool with any of the dm mapper names, I get the following error:
Code:
root@254-pve-03345:/dev/mapper# zpool create -o ashift=13 test mirror /dev/mapper/0x5002538b111bfbc0 /dev/mapper/0x5002538b111bfc20
cannot open '/dev/mapper/0x5002538b111bfbc0': Device or resource busy
cannot open '/dev/mapper/0x5002538b111bfc20': Device or resource busy
cannot create 'test': one or more vdevs refer to the same device, or one of
the devices is part of an active md or lvm device

What did I miss? It's driving me crazy.

Any suggestions would be much appreciated!
 
Did you make any progress?
cannot open '/dev/mapper/0x5002538b111bfbc0': Device or resource busy
Why is it busy? I think that is the interesting question here.
Do you get the same error from other commands? For example parted /dev/mapper/0x5002538b111bfbc0.
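A few generic checks can also show what is holding the device. A sketch (the mapper and dm names are taken from the multipath -ll output above):
Code:
# Open count > 0 means something still has the mapper device open:
dmsetup info 0x5002538b111bfbc0

# Kernel-level holders (md, LVM, other dm devices) show up here;
# dm-18 is the name multipath -ll reported for this mapper:
ls /sys/block/dm-18/holders

# Userspace processes holding the node, if any:
fuser -v /dev/mapper/0x5002538b111bfbc0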
 

Yep, I did make some progress indeed.

Unfortunately, I didn't manage to find out what caused the "Device busy" error. My assumption is that it is somehow related to the ZFS import (scan?) procedure that runs during PVE (OS) startup: all the disks had previously been part of another ZFS pool on different storage without multipath (that pool was created with /dev/disk/by-id/scsi-.. symlinks). Obviously the previous pool was not active (imported); moreover, it had been destroyed before the disks were moved.
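
If that assumption is right, the old pool's labels should still be visible on the disks. A quick way to check (just a sketch, using one of the device names from above):
Code:
# zdb -l prints any ZFS labels left on the device; a pool that was
# destroyed (but never label-cleared) will still show up here:
zdb -l /dev/mapper/0x5002538b111bfbc0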

So I had to remove all partitions with parted (no "Device busy" errors there), wipe the disks with wipefs, and reboot. Doing that for all 27 disks was a pain.

After the reboot I was able to create the pool with the "/dev/mapper/.." links successfully.
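
Roughly, the per-disk cleanup looked like the sketch below (device names are just the two examples from above; the exact parted steps depend on how many partitions each disk had):
Code:
# Per-disk cleanup sketch -- repeat for every affected disk, then reboot:
for dev in /dev/mapper/0x5002538b111bfbc0 /dev/mapper/0x5002538b111bfc20; do
    parted -s "$dev" print           # list leftover partitions
    # parted -s "$dev" rm 1          # remove each partition by number
    wipefs --all "$dev"              # clear remaining partition/ZFS signatures
done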
 
Okay. Thanks for the update :) I marked the thread as solved. It might help others who have a similar problem.
 
