[SOLVED] unable to create zfs pool - mountpoint exists and is not empty

mocanub

Active Member
Dec 12, 2018
Hi everyone,

Has anybody bumped into this before? I have two unused 2TB SSDs that I would like to configure as a ZFS RAID1 (mirror). The problem is that I can't create the ZFS pool with a specific name. I've also tried swapping the disks for new ones, but it makes no difference.

Code:
root@pve-node-18:/etc/pve# wipefs -a /dev/sde
/dev/sde: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sde: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sde: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sde: calling ioctl to re-read partition table: Success
root@pve-node-18:/etc/pve# wipefs -a /dev/sdf
/dev/sdf: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdf: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdf: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdf: calling ioctl to re-read partition table: Success
root@pve-node-18:/etc/pve# zpool create -f -o 'ashift=12' backup_node_18 mirror sde sdf
mountpoint '/backup_node_18' exists and is not empty
use '-m' option to provide a different default
root@pve-node-18:/etc/pve# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
VM_NODE_18  1.81T  53.0G  1.76T        -         -     0%     2%  1.00x    ONLINE  -
rpool        110G  4.18G   106G        -         -     4%     3%  1.00x    ONLINE  -
root@pve-node-18:/etc/pve# zfs list
NAME                         USED  AVAIL     REFER  MOUNTPOINT
VM_NODE_18                   206G  1.55T       96K  /VM_NODE_18
VM_NODE_18/vm-121-disk-0    51.6G  1.59T     14.3G  -
VM_NODE_18/vm-30823-disk-0  51.6G  1.58T     26.7G  -
VM_NODE_18/vm-30823-disk-1  20.6G  1.57T     6.99G  -
VM_NODE_18/vm-30823-disk-2  20.6G  1.57T      285M  -
VM_NODE_18/vm-30823-disk-3  20.6G  1.57T     2.95G  -
VM_NODE_18/vm-30823-disk-4  20.6G  1.57T     1.26G  -
VM_NODE_18/vm-30823-disk-5  20.6G  1.57T      490M  -
rpool                       4.17G   102G      104K  /rpool
rpool/ROOT                  4.12G   102G       96K  /rpool/ROOT
rpool/ROOT/pve-1            4.12G   102G     4.12G  /
rpool/data                    96K   102G       96K  /rpool/data

Thanks in advance,
Bogdan M.
 
Hey,

Could you post the output of ls -la /backup_node_18?
 
Hi Hannes,

Sure. Here you go:

Bash:
root@pve-node-18:/home/bogdan# ls -la /backup_node_18
total 10
drwxr-xr-x  3 root root  3 Aug 25 13:09 .
drwxr-xr-x 20 root root 26 Aug 27 14:51 ..
drwxr-xr-x  2 root root  2 Aug 25 13:09 dump

Regards,
Bogdan M
 
ZFS needs the mountpoint /backup_node_18 (/<poolname> is the default) to be empty, so either clear the directory with rm -rf /backup_node_18, or tell zpool to mount the pool somewhere else with -m /<mount-path>. The complete command would look something like this:
Code:
zpool create -m /<some-path> -f -o 'ashift=12' backup_node_18 mirror sde sdf
 
And it looks like you already have a directory storage with the VZdump backups content type pointing to /backup_node_18.
Make sure to enable the is_mountpoint option via pvesm for that directory storage, so Proxmox won't create the folder and store backups there when the mountpoint isn't mounted.
If mounting the ZFS pool fails, all those backups would end up on the root filesystem until it fills up and your node stops working.
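As a sketch, assuming the directory storage is also named backup_node_18 in your storage configuration (the actual storage ID may differ on your node):

```shell
# Mark /backup_node_18 as a mountpoint: PVE will treat the storage as
# unavailable (and refuse to write backups there) unless something is
# actually mounted at that path.
pvesm set backup_node_18 --is_mountpoint yes

# Check that the option was written to the storage configuration
grep -A 4 'backup_node_18' /etc/pve/storage.cfg
```

With is_mountpoint set, a failed pool import makes the backup jobs error out instead of silently filling the root filesystem.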
 
Hi Dunuin,

I was about to ask where the backups will end up if the zfs pool is not mounted. But your later edit clarified this for me.
I will set that option in that case.

Thank you,
Bogdan M.
 