Unable to create VM on ZFS over iSCSI storage

ramrod

New Member
Mar 30, 2024
I have a NAS running OMV bare metal, and Proxmox running bare metal on a separate machine. I successfully added the OMV ZFS storage to the PVE node per https://pve.proxmox.com/wiki/Storage:_ZFS_over_ISCSI . Proxmox can see the drive, it's enabled and active, but it won't let me create VMs on it. I can create them on a small local storage within the PVE machine.
The only error message I get from Proxmox is:

Code:
cannot open '': name must begin with a letter
TASK ERROR: unable to create VM 101 - command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.1.99_id_rsa root@192.168.1.99 zfs create -s -b 4k -V 33554432k /storage/pve/vm-101-disk-0' failed: exit code 1
The journalctl output is:

Code:
Mar 30 09:33:31 pve pvedaemon[27283]: VM 101 creating disks failed
Mar 30 09:33:31 pve pvedaemon[27283]: unable to create VM 101 - command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.1.99_id_rsa root@192.168.1.99 zfs create -s -b 4k -V 33554432k /storage/pve/vm-101-disk-0' failed: exit code 1
Mar 30 09:33:31 pve pvedaemon[15039]: <root@pam> end task UPID:pve:00006A93:000F3C3D:660830CA:qmcreate:101:root@pam: unable to create VM 101 - command '/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.1.99_id_rsa root@192.168.1.99 zfs create -s -b 4k -V 33554432k /storage/pve/vm-101-disk-0' failed: exit code 1

I've considered permissions issues on the OMV side and have given full access to the share.

Any ideas? Thanks.
 
What happens if you run the command

Code:
/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.1.99_id_rsa root@192.168.1.99 zfs create -s -b 4k -V 33554432k /storage/pve/vm-101-disk-0

manually?
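If that prompts for a password or just hangs, the SSH trust is the problem rather than ZFS itself. A quick way to separate the two is a harmless read-only call over the same key (a sketch, reusing the key path from your error message):

Code:
/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.1.99_id_rsa root@192.168.1.99 zfs list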
 
What happens if you run the command

Code:
/usr/bin/ssh -o 'BatchMode=yes' -i /etc/pve/priv/zfs/192.168.1.99_id_rsa root@192.168.1.99 zfs create -s -b 4k -V 33554432k /storage/pve/vm-101-disk-0

manually?
" cannot open '': name must begin with a letter "
which I recognize from the original error. That suggests something is named incorrectly, which would make sense, but what is it referring to?
 
I successfully added the OMV ZFS storage to the PVE node
Can you show the output of "cat /etc/pve/storage.cfg" and "pvesm status" (as text in CODE tags)?

Also try to run just "zfs create -s -b 4k -V 33554432k /storage/pve/vm-101-disk-0" directly from the NAS shell.
Or, how about "zfs list"?
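For comparison, a ZFS over iSCSI definition in /etc/pve/storage.cfg usually looks roughly like the sketch below. All values here (storage ID, provider, pool, target) are placeholders for illustration, not your actual config:

Code:
zfs: omv-zfs
        blocksize 4k
        iscsiprovider iet
        pool tank
        portal 192.168.1.99
        target iqn.2003-01.org.example:storage.target0
        content images
        sparse 1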

I presume OMV is openmediavault? If so, it does not appear to support ZFS natively: https://docs.openmediavault.org/en/stable/administration/storage/filesystems.html#:~:text=Note-,ZFS,-Support for zfs

good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Can you show the output of "cat /etc/pve/storage.cfg" and "pvesm status" (as text in CODE tags)?

Also try to run just "zfs create -s -b 4k -V 33554432k /storage/pve/vm-101-disk-0" directly from the NAS shell.
Or, how about "zfs list"?

I presume OMV is openmediavault? If so, it does not appear to support ZFS natively: https://docs.openmediavault.org/en/stable/administration/storage/filesystems.html#:~:text=Note-,ZFS,-Support for zfs

good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
"zfs create -s -b 4k -V 33554432k /storage/pve/vm-101-disk-0" gives notification " cannot open '': name must begin with a letter " . seems something has a blank name which is probably my issue, but im not sure what that is referring to.

"/etc/pve/storage.cfg" screenshot attached.

theres an OMV-extra plugin for zfs support that I have installed.
 

Attachments

  • Screenshot 2024-03-30 115857.png
"zfs create -s -b 4k -V 33554432k /storage/pve/vm-101-disk-0" gives notification " cannot open '': name must begin with a letter " . seems something has a blank name which is probably my issue, but im not sure what that is referring to.
Are you running the command with the surrounding " characters included?

Can you also take a screenshot of the shell in which you want to run the command?
 
Are you running the command with the surrounding " characters included?

Can you also take a screenshot of the shell in which you want to run the command?
Nope, just a copy/paste from the thread here.
Is this screenshot what you're looking for?
 

Attachments

  • Screenshot 2024-03-30 122145.png
Oh yeah ... the starting / of storage is wrong. Please also fix it in the storage view. Pool names do not start with /.
He is also running the zfs create test in the wrong place - PVE vs OMV.
But that's probably not important, as normally it would be run over SSH by the system.
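To make the naming issue concrete, this is roughly what the two variants do on the NAS, assuming the pool really is called "storage" with a "pve" dataset under it:

Code:
# fails: a ZFS dataset name must not start with '/'
zfs create -s -b 4k -V 33554432k /storage/pve/vm-101-disk-0
# -> cannot open '': name must begin with a letter

# should work once the leading slash is dropped from the storage's pool field
zfs create -s -b 4k -V 33554432k storage/pve/vm-101-disk-0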



Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Oh yeah ... the starting / of storage is wrong. Please also fix it in the storage view. Pool names do not start with /.
Like this? Now it's not registering as active like it was with the starting "/". openmediavault creates the ZFS pool as /storage by default, I'm pretty sure.
 

Attachments

  • Screenshot 2024-03-30 124954.png
  • Screenshot 2024-03-30 125244.png
No, if you are creating ZFS, by default it is created with /poolname as the mountpoint; the pool name itself has no leading /.
Right. In OMV I have my ZFS pool named /storage, in which I created a share named "pve", so the pool I'm directing the PVE storage to is named /storage/pve.
 
If that's the share name, in PVE try & use just: pve
This is the ZFS structure:
https://itsfoss.com/what-is-zfs/#:~:text=ZFS will handle partitioning and formatting.
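As a rough sketch of that hierarchy, with the names used in this thread (illustrative only, assuming "pve" is a filesystem dataset inside the pool):

Code:
# conceptual ZFS hierarchy (names illustrative):
#   storage                     <- pool: the name itself has no leading slash
#   storage/pve                 <- filesystem dataset (what OMV presents as a "share")
#   storage/pve/vm-101-disk-0   <- zvol that PVE creates for each VM disk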

Unless I am misunderstanding something - there is no such thing as a "share". It seems the OP created a ZVOL.
However, Proxmox doesn't need a ZVOL, it needs a POOL. Proxmox will then try to create a ZVOL per LUN on its own.
The pool name must conform to the naming restrictions mentioned earlier.
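A quick, read-only way to see what actually exists on the OMV side before pointing PVE at it:

Code:
# on the OMV shell: list pools and datasets - note the names have no leading '/'
zpool list -o name,size,health
zfs list -o name,type,mountpoint
# whatever "zfs list" shows (e.g. storage or storage/pve) is what belongs in the
# storage's "pool" field on the PVE side - never a path like /storage/pve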

good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
You should also enable thin provisioning; AFAIK there is no downside as long as you don't overcommit much and regularly run fstrim inside your guests.
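A minimal sketch of the two pieces involved, assuming the storage ID in storage.cfg is "omv-zfs" (a placeholder - use your real ID; the same setting is the "Thin provision" checkbox in the storage's edit dialog):

Code:
# on the PVE node: mark the ZFS over iSCSI storage as thin-provisioned (sparse zvols)
pvesm set omv-zfs --sparse 1

# inside each guest, every now and then: hand unused blocks back so sparse zvols stay small
fstrim -av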