[SOLVED] Unable to parse zfs volume

ororokorebuh

New Member
Nov 16, 2024
Hello. Please could you help me with this situation?

I created a ZFS pool named data2:
zfs.jpg
I SSHed into my Proxmox host and can see this folder:
data2.jpg
So I presume this folder represents the content of the data2 ZFS pool. I uploaded a file named cecko.qcow2 into this folder. It is a Windows Server image converted from VHDX.

I have a VM where I created a SATA1 hard disk with the path data2:cecko.qcow2 (I added the line to the .conf file).
I set this SATA1 as the boot disk in the options.
vm.jpg


When I try to start this VM, it shows the error: TASK ERROR: unable to parse zfs volume name 'cecko.qcow2'

Any idea what could be wrong? Thank you.
 
Can you post your storage config please?
Code:
pvesm status
and
Code:
cat /etc/pve/storage.cfg

QCOW2 can only be used on directory storage [0], and it is not recommended to put a QCOW2 on top of ZFS (CoW on CoW). If you use ZFS as the storage backend [1], the virtual disk must be imported/converted as a zvol instead. For example:

Code:
qm importdisk <vmid> <exported image> <target storage>

[0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_directory
[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_zfspool
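
For illustration, this is roughly what the difference looks like in the VM's .conf file (a sketch assuming VMID 100; the zvol name is only an example of what qm importdisk would create):

Code:
# not parseable on a zfspool storage: a qcow2 file name is not a valid ZFS volume name
sata1: data2:cecko.qcow2

# valid: the zfspool storage references a zvol, e.g. the one created by qm importdisk
sata1: data2:vm-100-disk-0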
 
Code:
Name       Type     Status  Total      Used       Available  %
data1      zfspool  active  483655680  408835908  74819772   84.53%
data2      zfspool  active  967311360  564684084  402627276  58.38%
local      dir      active  98497780   2806124    90642108   2.85%
local-lvm  lvmthin  active  354275328  0          354275328  0.00%

................................................................

Code:
root@proxmox:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content vztmpl,backup,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

zfspool: data1
        pool data1
        content images,rootdir
        mountpoint /data1
        nodes proxmox

zfspool: data2
        pool data2
        content rootdir,images
        mountpoint /data2
        nodes proxmox
 
Seems to me like data1 and data2 are usable - or not?

Here I'm a little bit lost. Please could you point me to the exact steps I need to follow?
 
Thank you for the config. And yes, you can use data1 and data2 for your images, but not as a file, only as a zvol. So these steps should work for you:

  1. Import your qcow2 as a zvol: qm importdisk 100 /path/to/your/cecko.qcow2 data2
  2. The VM disk should now appear as an unused disk in the VM config.
  3. Add the unused disk to your VM as an active disk.
  4. Start your VM and test whether it boots now.
  5. If everything is OK, delete the qcow2.
See also qm help importdisk.
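
Put together, the whole sequence could look like this on the CLI (a sketch only, assuming VMID 100, the qcow2 at /data2/cecko.qcow2, and that the import ends up as data2:vm-100-disk-0; verify the actual names with qm config 100):

Code:
qm importdisk 100 /data2/cecko.qcow2 data2    # 1. import the qcow2 as a zvol
qm config 100 | grep unused                   # 2. the imported disk shows up as unusedX
qm set 100 --sata1 data2:vm-100-disk-0        # 3. attach it as the sata1 disk
qm set 100 --boot order=sata1                 #    make sata1 the boot disk
qm start 100                                  # 4. boot test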
 
OK, I will try. Does it also mean that I could (in the future) convert from VHDX to RAW, or is converting VHDX to QCOW2 the right way?
 
You do not need to convert your VHDX for this import. You can import it directly. It is then converted into a zvol (raw) automatically during import.
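
For example, a direct import would look like this (a sketch; the path is a placeholder, VMID 100 as above):

Code:
# importdisk reads the VHDX directly and writes it out as a raw zvol on data2
qm importdisk 100 /path/to/your/disk.vhdx data2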
 
And a last question: this VHDX represents one HDD from the original server (C:), and I have another VHDX which represents the second HDD (D:). I want to import it to data1 (ZFS). Will the command to import this second VHDX be: qm importdisk 100 /path/to/your/second.vhdx data1 ?
I mean, the "100" in the command is the VM ID and will stay the same? Thank you.
 
I have a problem with the command: qm importdisk 100 /mnt/USB_Data/HNDSRVR-C.VHDX data2

The file on the USB HDD is 779 GB. As you can see, data2 has 991 GB of free space. But the command above fails with: zfs error: cannot create 'data2/vm-100-disk-0': out of space


Code:
root@proxmox:/# zfs list -o space
NAME   AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
data1   461G  3.08M        0B     96K             0B      2.99M
data2   922G  3.56M        0B     96K             0B      3.47M


Code:
Filesystem            Size  Used  Avail  Use%  Mounted on
udev                   68G     0    68G    0%  /dev
tmpfs                  14G  1.6M    14G    1%  /run
/dev/mapper/pve-root  101G  2.9G    93G    4%  /
tmpfs                  68G   36M    68G    1%  /dev/shm
tmpfs                 5.3M     0   5.3M    0%  /run/lock
efivarfs              263k   52k   206k   21%  /sys/firmware/efi/efivars
/dev/nvme0n1p2        1.1G   13M   1.1G    2%  /boot/efi
/dev/fuse             135M   17k   135M    1%  /etc/pve
data1                 496G  132k   496G    1%  /data1
data2                 991G  132k   991G    1%  /data2
/dev/sde2             2.1T  1.3T   771G   62%  /mnt/USB_Data
tmpfs                  14G     0    14G    0%  /run/user/0

My ZFS pools are set up as mirrors.
 
And a last question: this VHDX represents one HDD from the original server (C:), and I have another VHDX which represents the second HDD (D:). I want to import it to data1 (ZFS). Will the command to import this second VHDX be: qm importdisk 100 /path/to/your/second.vhdx data1 ?
I mean, the "100" in the command is the VM ID and will stay the same? Thank you.
Yes, you are right.

The file on the USB HDD is 779 GB. As you can see, data2 has 991 GB of free space. But the command above fails with: zfs error: cannot create 'data2/vm-100-disk-0': out of space
If you have not activated “sparse” (the default), the entire size of the image is reserved [1]. I assume the maximum image size is larger than the available space of your ZFS storage data2. The solution to your problem would most likely be to activate “sparse”. This is called “Thin provision” in the WebUI.

Screenshot_20241119_163030.png
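
If you prefer the CLI, the same option can be set on the storage definition there as well (a sketch for the data2 storage from this thread; pvesm set writes the flag into /etc/pve/storage.cfg):

Code:
# enable thin provisioning ("sparse") on the data2 zfspool storage,
# same effect as ticking "Thin provision" in the WebUI
pvesm set data2 --sparse 1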
Please note that this change only affects newly imported/created images; existing images remain unaffected. The following command can be used to remove the reservation from an existing image:

Code:
zfs set reservation=none refreservation=none <zpool>/<VM volume>
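# for example, with the names from this thread (hypothetical, once the volume exists):
# zfs set reservation=none refreservation=none data2/vm-100-disk-0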


[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#storage_zfspool
 
  • Like
Reactions: ororokorebuh
BUMP - you are right! Thank you.
The solution to your problem would most likely be to activate “sparse”. This is called “Thin provision” in the WebUI.