Cannot create ZFS pool

DosCorazones

New Member
Nov 23, 2015
Hi!

I'm trying to create a zpool on 2 disks, but I can't figure out how to get it right.

Basically, I read the following: if you have 2 disks that don't have any partition table, you can create a zpool with

zpool create poolname mirror /dev/sdX /dev/sdY

This command always fails, as zpool creates the partitions sdX1, sdX9, sdY1 and sdY9.
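
For reference, here is how I checked beforehand that the disks really have no partition table (a sketch; /dev/sdX stands in for each of the two disks):

Code:
fdisk -l /dev/sdX   # should show no partition entries on a blank disk
wipefs /dev/sdX     # lists filesystem/raid signatures; prints nothing when the disk is clean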

Has anyone else had this problem and maybe knows a way to get around that?

Best regards!
 
Hi,
try the command with -F.
If it does not work, please send the error message.
 
-F as a capital letter results in "invalid option".
With -f I get the following error message:

Code:
zpool create -f vmstorage mirror diskid1 diskid2
cannot resolve path 'diskid1-part1': 2
 
Hi,
could it be that you are in a VM?
 
Nope,

I'm on the Proxmox host.
I would have given the full disk ids, but the forum backend tells me I'm not allowed to post links to images and such when I enter a longer path than /dev/sd*.

As additional information, here is the partition table on the device after executing the command:

Code:
Disk /dev/sdk: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4E70590B-1B4C-AF47-B2DA-1D42799B4545


Device          Start        End    Sectors  Size Type
/dev/sdk1        2048 3907012607 3907010560  1.8T Solaris /usr & Apple ZFS
/dev/sdk9  3907012608 3907028991      16384    8M Solaris reserved 1
 
Try the path by id, /dev/disk/by-id/. If this is not working, please give me the output from
pveversion -v
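
For example, something like this (a sketch; the ata-... names are placeholders, use the ids that ls shows for your two disks):

Code:
ls -l /dev/disk/by-id/
zpool create poolname mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B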
 
Already tried referencing by id, exactly the same problem :/
Here is my pveversion output:
Code:
root@proxmox:~# pveversion -v
proxmox-ve: 4.0-22 (running kernel: 4.2.3-2-pve)
pve-manager: 4.0-57 (running version: 4.0-57/cc7c2b53)
pve-kernel-4.2.2-1-pve: 4.2.2-16
pve-kernel-4.2.3-2-pve: 4.2.3-22
lvm2: 2.02.116-pve1
corosync-pve: 2.3.5-1
libqb0: 0.17.2-1
pve-cluster: 4.0-24
qemu-server: 4.0-35
pve-firmware: 1.1-7
libpve-common-perl: 4.0-29
libpve-access-control: 4.0-9
libpve-storage-perl: 4.0-29
pve-libspice-server1: 0.12.5-2
vncterm: 1.2-1
pve-qemu-kvm: 2.4-12
pve-container: 1.0-21
pve-firewall: 2.0-13
pve-ha-manager: 1.0-13
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u1
lxc-pve: 1.1.4-3
lxcfs: 0.10-pve2
cgmanager: 0.39-pve1
criu: 1.6.0-1
zfsutils: 0.6.5-pve6~jessie
 
Are all kernel modules loaded?

Code:
lsmod | grep zfs
zfs 2813952 13
zunicode 331776 1 zfs
zcommon 57344 1 zfs
znvpair 90112 2 zfs,zcommon
spl 102400 3 zfs,zcommon,znvpair
zavl 16384 1 zfs
 
Seems to be correct:

Code:
root@proxmox:~# lsmod |grep zfs
zfs                  2813952  3
zunicode              331776  1 zfs
zcommon                57344  1 zfs
znvpair                90112  2 zfs,zcommon
spl                   102400  3 zfs,zcommon,znvpair
zavl                   16384  1 zfs
 
Can you make another fs on these disks?
 
I can. I already tried to manually create the necessary partitions, so that sdh1 and sdh2 are there, but zpool automatically formats the disk itself.
 
Yes, I know, but can you make a fs like ext4 on the 2 disks?
Do you use a controller card? If yes, which one exactly?
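
For example (a sketch; this is destructive, and /dev/sdX is a placeholder for one of the two disks):

Code:
mkfs.ext4 -F /dev/sdX   # -F forces mkfs to run on a whole block device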
 
Okay, I had only created the partitions and omitted formatting them. Actually formatting them obviously fails.

I created a 100% partition and it gets listed as /dev/sdh1 in fdisk and in /proc/partitions.
If I try to format it with mkfs.ext4 /dev/sdh1, I get the error "No such device or address".

The disks are accessed through an LSI 9207 HBA
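
One thing I have not ruled out yet is the kernel working with a stale partition table; a sketch of that check (assuming partprobe from the parted package is installed):

Code:
partprobe /dev/sdh      # ask the kernel to re-read the partition table
cat /proc/partitions    # sdh1 should be listed here afterwards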
 
Then I would say the controller is the problem.
I'm sorry, but I don't know more.
The only thing I know is that LSI has some troubled firmwares out there.
 
I am using other disks on this controller in a FreeNAS VM just fine, so basically it should work (the controller is NOT passed through; only the other disks are given to FreeNAS via virtio).

I would love to test whether a reboot solves this problem, but at the moment I can't touch the system, as a firewall VM is running :/

Any other hints that might help me?