[SOLVED] Previously used zfs drives are not available

Curt Hall

I have these 6 drives that all show "Device Mapper" under Usage, and consequently they are not available to reuse for a new ZFS pool.

[Screenshots: Proxmox GUI Disks view showing the six drives with Usage "Device Mapper"]

I have searched these forums for a way to make them available, but to no avail.
 

root@proxmox3:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
├─sda1 8:1 0 931.5G 0 part
└─35000c500b8ba5d47 253:4 0 931.5G 0 mpath
sdb 8:16 0 931.5G 0 disk
└─35000c500964830a7 253:5 0 931.5G 0 mpath
sdc 8:32 0 931.5G 0 disk
└─35000c500964840eb 253:6 0 931.5G 0 mpath
sdd 8:48 0 931.5G 0 disk
└─35000c500b8ba6ba3 253:7 0 931.5G 0 mpath
sde 8:64 0 931.5G 0 disk
└─35000c500964a380b 253:2 0 931.5G 0 mpath
sdf 8:80 0 931.5G 0 disk
└─35000c5009f224cdf 253:3 0 931.5G 0 mpath
sdg 8:96 0 92.2G 0 disk
├─sdg1 8:97 0 1007K 0 part
├─sdg2 8:98 0 512M 0 part
└─sdg3 8:99 0 91.7G 0 part
├─pve-swap 253:0 0 8G 0 lvm [SWAP]
├─pve-root 253:1 0 22.8G 0 lvm /
├─pve-data_tmeta 253:8 0 1G 0 lvm
│ └─pve-data-tpool 253:10 0 47.6G 0 lvm
│ └─pve-data 253:11 0 47.6G 0 lvm
└─pve-data_tdata 253:9 0 47.6G 0 lvm
└─pve-data-tpool 253:10 0 47.6G 0 lvm
└─pve-data 253:11 0 47.6G 0 lvm
sdh 8:112 0 5T 0 disk
└─2f516411d15cedb126c9ce9008e5e60a7 253:12 0 5T 0 mpath
├─nimblenas-vm--117--disk--0 253:13 0 71.6G 0 lvm
├─nimblenas-vm--174--disk--0 253:14 0 62G 0 lvm
├─nimblenas-vm--170--disk--0 253:15 0 62G 0 lvm
├─nimblenas-vm--130--disk--0 253:16 0 70G 0 lvm
├─nimblenas-vm--133--disk--0 253:17 0 93G 0 lvm
├─nimblenas-vm--132--disk--0 253:18 0 74G 0 lvm
├─nimblenas-vm--121--disk--0 253:19 0 62G 0 lvm
├─nimblenas-vm--125--disk--0 253:20 0 75G 0 lvm
├─nimblenas-vm--137--disk--0 253:21 0 69.6G 0 lvm
├─nimblenas-vm--138--disk--0 253:22 0 62G 0 lvm
├─nimblenas-vm--139--disk--0 253:23 0 62G 0 lvm
├─nimblenas-vm--140--disk--0 253:24 0 69.6G 0 lvm
├─nimblenas-vm--141--disk--0 253:25 0 69.6G 0 lvm
├─nimblenas-vm--142--disk--0 253:26 0 69.6G 0 lvm
├─nimblenas-vm--143--disk--0 253:27 0 69.6G 0 lvm
├─nimblenas-vm--144--disk--0 253:28 0 62G 0 lvm
└─nimblenas-vm--145--disk--0 253:29 0 71.6G 0 lvm
sdi 8:128 0 5T 0 disk
└─2f516411d15cedb126c9ce9008e5e60a7 253:12 0 5T 0 mpath
├─nimblenas-vm--117--disk--0 253:13 0 71.6G 0 lvm
├─nimblenas-vm--174--disk--0 253:14 0 62G 0 lvm
├─nimblenas-vm--170--disk--0 253:15 0 62G 0 lvm
├─nimblenas-vm--130--disk--0 253:16 0 70G 0 lvm
├─nimblenas-vm--133--disk--0 253:17 0 93G 0 lvm
├─nimblenas-vm--132--disk--0 253:18 0 74G 0 lvm
├─nimblenas-vm--121--disk--0 253:19 0 62G 0 lvm
├─nimblenas-vm--125--disk--0 253:20 0 75G 0 lvm
├─nimblenas-vm--137--disk--0 253:21 0 69.6G 0 lvm
├─nimblenas-vm--138--disk--0 253:22 0 62G 0 lvm
├─nimblenas-vm--139--disk--0 253:23 0 62G 0 lvm
├─nimblenas-vm--140--disk--0 253:24 0 69.6G 0 lvm
├─nimblenas-vm--141--disk--0 253:25 0 69.6G 0 lvm
├─nimblenas-vm--142--disk--0 253:26 0 69.6G 0 lvm
├─nimblenas-vm--143--disk--0 253:27 0 69.6G 0 lvm
├─nimblenas-vm--144--disk--0 253:28 0 62G 0 lvm
└─nimblenas-vm--145--disk--0 253:29 0 71.6G 0 lvm
sdj 8:144 0 5T 0 disk
└─2f516411d15cedb126c9ce9008e5e60a7 253:12 0 5T 0 mpath
├─nimblenas-vm--117--disk--0 253:13 0 71.6G 0 lvm
├─nimblenas-vm--174--disk--0 253:14 0 62G 0 lvm
├─nimblenas-vm--170--disk--0 253:15 0 62G 0 lvm
├─nimblenas-vm--130--disk--0 253:16 0 70G 0 lvm
├─nimblenas-vm--133--disk--0 253:17 0 93G 0 lvm
├─nimblenas-vm--132--disk--0 253:18 0 74G 0 lvm
├─nimblenas-vm--121--disk--0 253:19 0 62G 0 lvm
├─nimblenas-vm--125--disk--0 253:20 0 75G 0 lvm
├─nimblenas-vm--137--disk--0 253:21 0 69.6G 0 lvm
├─nimblenas-vm--138--disk--0 253:22 0 62G 0 lvm
├─nimblenas-vm--139--disk--0 253:23 0 62G 0 lvm
├─nimblenas-vm--140--disk--0 253:24 0 69.6G 0 lvm
├─nimblenas-vm--141--disk--0 253:25 0 69.6G 0 lvm
├─nimblenas-vm--142--disk--0 253:26 0 69.6G 0 lvm
├─nimblenas-vm--143--disk--0 253:27 0 69.6G 0 lvm
├─nimblenas-vm--144--disk--0 253:28 0 62G 0 lvm
└─nimblenas-vm--145--disk--0 253:29 0 71.6G 0 lvm
sdk 8:160 0 5T 0 disk
└─2f516411d15cedb126c9ce9008e5e60a7 253:12 0 5T 0 mpath
├─nimblenas-vm--117--disk--0 253:13 0 71.6G 0 lvm
├─nimblenas-vm--174--disk--0 253:14 0 62G 0 lvm
├─nimblenas-vm--170--disk--0 253:15 0 62G 0 lvm
├─nimblenas-vm--130--disk--0 253:16 0 70G 0 lvm
├─nimblenas-vm--133--disk--0 253:17 0 93G 0 lvm
├─nimblenas-vm--132--disk--0 253:18 0 74G 0 lvm
├─nimblenas-vm--121--disk--0 253:19 0 62G 0 lvm
├─nimblenas-vm--125--disk--0 253:20 0 75G 0 lvm
├─nimblenas-vm--137--disk--0 253:21 0 69.6G 0 lvm
├─nimblenas-vm--138--disk--0 253:22 0 62G 0 lvm
├─nimblenas-vm--139--disk--0 253:23 0 62G 0 lvm
├─nimblenas-vm--140--disk--0 253:24 0 69.6G 0 lvm
├─nimblenas-vm--141--disk--0 253:25 0 69.6G 0 lvm
├─nimblenas-vm--142--disk--0 253:26 0 69.6G 0 lvm
├─nimblenas-vm--143--disk--0 253:27 0 69.6G 0 lvm
├─nimblenas-vm--144--disk--0 253:28 0 62G 0 lvm
└─nimblenas-vm--145--disk--0 253:29 0 71.6G 0 lvm

The disks I'm trying to add to a ZFS pool are sda - sdf.
 
Tried that; it doesn't change the status. They have no partition now that I can see. I assume I'm missing something that makes them "available", but I can't figure it out. If you have a step-by-step, sure-fire way to do this, by all means let me know. I have already followed nearly every forum and Google-searched method, and my Proxmox box refuses to use the 6 disks.
 
I did not mean delete the partition.
I meant go into gdisk (CLI) and look at the disk structure.
It should list 3 options after the message about which partition table types were found.
Choose to create a clean GPT.

This should remove anything on the disk and create a new, clean GPT table.

You may need to remove and reinsert the disk afterwards, but most of the time it is ready to use as is.
So step 1: on the host CLI, gdisk /dev/sdX
Step 2: display the partition table.
Step 3: use the option to create a new GPT.
Step 4: w to write everything to disk and exit.
Done...
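For reference, a minimal sketch of that interactive session (p, o and w are gdisk's standard print / new-GPT / write commands; /dev/sdX is a placeholder for each of the six disks):

gdisk /dev/sdX
Command (? for help): p    (print the current partition table)
Command (? for help): o    (create a new, empty GPT; confirm with Y when asked)
Command (? for help): w    (write the new table to disk and exit; confirm with Y)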
 
Ran that on all 6 disks, which now show this:
root@proxmox3:~# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): i
No partitions

When I try to run the zpool create I still see this:

root@proxmox3:~# zpool create -f -o ashift=12 RAID0 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
/dev/sda is in use and contains a unknown filesystem.
/dev/sdb is in use and contains a unknown filesystem.
/dev/sdc is in use and contains a unknown filesystem.
/dev/sdd is in use and contains a unknown filesystem.
/dev/sde is in use and contains a unknown filesystem.
/dev/sdf is in use and contains a unknown filesystem.
 
Do the drives show up differently in the lsblk command after the cleanup?

Re-creating the GPT table has always worked for me.

The only other thing I have done once is zero out the disk with dd.

dd if=/dev/zero of=/dev/hdX
This removes everything, but it is a very long process.
 
They do show up differently in lsblk, but still don't cooperate:

root@proxmox3:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 931.5G 0 disk
└─35000c500b8ba5d47 253:4 0 931.5G 0 mpath
sdb 8:16 0 931.5G 0 disk
└─35000c500964830a7 253:5 0 931.5G 0 mpath
sdc 8:32 0 931.5G 0 disk
└─35000c500964840eb 253:6 0 931.5G 0 mpath
sdd 8:48 0 931.5G 0 disk
└─35000c500b8ba6ba3 253:7 0 931.5G 0 mpath
sde 8:64 0 931.5G 0 disk
└─35000c500964a380b 253:2 0 931.5G 0 mpath
sdf 8:80 0 931.5G 0 disk
└─35000c5009f224cdf 253:3 0 931.5G 0 mpath

I am not familiar with the dd command, and I have many other disks that I do not want dropped/deleted/corrupted, etc. I just want to make sure the "dd" command will not affect those.
 
I don't know your situation, but I usually take the disks out and use my other PC to manipulate them. That way I always know what I am working with.
And yes, you need to be very careful with the dd command.


Also, it seems that your disks are mounted somewhere. Check your system settings.

I think your disks are still mapped to LVM volumes somewhere.
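For what it's worth, a quick way to check both of those guesses with the standard util-linux and LVM tools (nothing here is specific to your setup; sda-sdf are just the six disks from your lsblk output):

findmnt | grep -E 'sd[a-f]'    # shows whether anything on sda-sdf is mounted
lvs -o +devices                # lists logical volumes and the devices backing them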
 
Originally they were in a hardware RAID10 on this server and then added to a ZFS pool. I broke the RAID and enabled JBOD on the server; ever since, I can't get Proxmox to see them as available so I can add them to a RAIDZ. I am a remote worker now, thanks to COVID, but we are going back to the office in a couple of weeks, and I will have better access then to remove and rework the drives. Thanks for your input, Jim!
 
Yeah, that is what it looks like. If you feel comfortable, double or even triple check the drive IDs, i.e. sda, sdb, etc., then run dd on each and see if it solves the problem.

Pick 2, run dd on them, then try creating a ZFS mirror. If it works, break it and do the same for the rest. Slow and steady.
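A rough sketch of that test, assuming sda and sdb are the first pair and "testpool" is just a throwaway name (everything on both disks is destroyed):

dd if=/dev/zero of=/dev/sda                       # wipe the first disk
dd if=/dev/zero of=/dev/sdb                       # wipe the second disk
zpool create testpool mirror /dev/sda /dev/sdb    # try a two-disk ZFS mirror
zpool status testpool                             # confirm the pool came up healthy
zpool destroy testpool                            # break it again before moving on to the next pair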
 
This is what you gave me:

dd if=/dev/zero of=/dev/hdX

Where do I put /dev/sda, for example?
 
Replace the of= part with your drive.
The command breaks down as:
<command> dd
<input> if=/dev/zero
<output> of=/dev/<your disk>


This will write "0" across the whole disk, effectively emptying out everything.
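So for the first of your six disks, for example (double-check the device name first, since this destroys everything on it; sdb through sdf work the same way):

dd if=/dev/zero of=/dev/sda    # zero out the whole of sda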
 
Hmm, well, I ran dd on sda, and either it really doesn't do anything or it takes a really long time to complete. It's been about 5 hours and nothing looks any different.
 
You shouldn't need to wipe the disks using dd - just clean out the MBR & GPT tables, all of the backup GPT copies, and any LVM data.

"sgdisk --zap-all <device>" should do it.

After you've done that - or if you used the "dd" wipe - you have to get the system to re-trigger the device info for the disk. If it's hot-swappable, then pulling it out for a bit and re-plugging it will do. Rebooting the host will do it too. Surely there is a less intrusive way, but I don't know it.

PS - if you do the dd as shown above, then you are writing to the disk at dd's small default block size (512 bytes at a time). Buffered, of course, but still a lot slower than you really want. It will go quite a bit faster if you use block writes:

"dd ibs=81920 obs=81920 if=/dev/zero of=<device> status=progress"

This version will do block writes 20 disk sectors at a time (assuming 4k sectors - 160 sectors at a time for 512-byte disks). This is LOTS faster, and adding the progress status helps you watch it go.
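Applied to the six disks in this thread, that could look something like the following sketch (sda through sdf are the disks from the earlier lsblk output; --zap-all destroys the primary and backup GPT plus the protective MBR on each of them):

for d in sda sdb sdc sdd sde sdf; do
    sgdisk --zap-all /dev/$d    # wipe all partition table structures on this disk
done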
 
I ran the sgdisk command you mentioned above, but the disks are still not available for zpool creation:

root@proxmox3:~# zpool create -f -o ashift=12 RAID0 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
/dev/sda is in use and contains a unknown filesystem.
/dev/sdb is in use and contains a unknown filesystem.
/dev/sdc is in use and contains a unknown filesystem.
/dev/sdd is in use and contains a unknown filesystem.
/dev/sde is in use and contains a unknown filesystem.
/dev/sdf is in use and contains a unknown filesystem.
 
Try doing a "vgscan" to re-scan the LVM cache. They originally had LVM data on them (lsblk in post #3) and it may still be registered in the LVM cache.
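For example (vgscan and pvs are standard LVM commands; pvs simply reports which devices LVM currently considers physical volumes):

vgscan    # re-scan all block devices for LVM volume groups
pvs       # list physical volumes; sda-sdf should not show up here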
 
I did that; it makes no difference. Also, in the GUI they still show as Device Mapper under "Usage":

[Screenshot: GUI Disks view still listing the drives with Usage "Device Mapper"]
 
Odd. My last suggestion is to reboot the Proxmox host now that the disks are wiped and see if that clears anything that is still stuck. After that, I'm at a loss.
 
