volume size must be a multiple of volume block size

sahostking

Renowned Member
Hi

I set up Proxmox with ZFS RAIDZ2.

NAME USED AVAIL REFER MOUNTPOINT
rpool 67.2G 3.44T 192K /rpool
rpool/ROOT 1.25G 3.44T 192K /rpool/ROOT
rpool/ROOT/pve-1 1.25G 3.44T 1.25G /
rpool/swap 65.9G 3.51T 128K -


Now I am trying to restore a backup from the SATA servers to this new SSD one. When I restore a RAW file all is well, but when trying to restore a QCOW2 I see this error:

Code:
restore vma archive: lzop -d -c /mnt/pve/nfs-storage/dump/vzdump-qemu-100-2016_04_16-00_04_12.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp6019.fifo - /var/tmp/vzdumptmp6019
CFG: size: 431 name: qemu-server.conf
DEV: dev_id=1 size: 53687091712 devname: drive-virtio0
DEV: dev_id=2 size: 32212255232 devname: drive-virtio1
CTIME: Sat Apr 16 00:04:13 2016
TASK ERROR: command 'lzop -d -c /mnt/pve/nfs-storage/dump/vzdump-qemu-100-2016_04_16-00_04_12.vma.lzo|vma extract -v -r /var/tmp/vzdumptmp6019.fifo - /var/tmp/vzdumptmp6019' failed: zfs error: cannot create 'rpool/vm-100-disk-1': volume size must be a multiple of volume block size

Any ideas? Note all I did in the Proxmox setup was select all disks and choose RAIDZ2. Then when booted I just added the ZFS storage via the Proxmox GUI, plus the NFS storage for one of the backup servers, and tried to restore.
 
Seems like a QCOW2 restore issue on ZFS. I know RAW is recommended, but I thought the disks could temporarily be restored as QCOW2 and later converted to RAW, as they are TBs of data.
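The arithmetic behind the error can be checked by hand. A minimal sketch, assuming the 8K default volblocksize that the Proxmox zfspool storage uses for new zvols, with the drive size taken from the log above:

Code:
# 53687091712 bytes (drive-virtio0) is 50 GiB plus 512 bytes, not an 8K multiple:
echo $(( 53687091712 % 8192 ))                   # prints 512, so ZFS refuses the zvol
# rounding up to the next 8K boundary gives a size ZFS would accept:
echo $(( (53687091712 + 8191) / 8192 * 8192 ))   # prints 53687099392

The same stray 512 bytes shows up on drive-virtio1 (32212255232 = 30 GiB + 512), which would explain why disks with cleanly sized images restore fine while these fail.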

Another error I get. Note I just installed Proxmox and it is on RAIDZ2 with SSDs, all done through the GUI. No changes whatsoever.

Running as unit 100.scope.
kvm: -drive file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-ide0,format=qcow2,cache=none,aio=native,detect-zeroes=on: file system may not support O_DIRECT
kvm: -drive file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-ide0,format=qcow2,cache=none,aio=native,detect-zeroes=on: Could not open '/var/lib/vz/images/100/vm-100-disk-1.qcow2': Invalid argument
TASK ERROR: start failed: command '/usr/bin/systemd-run --scope --slice qemu --unit 100 -p 'KillMode=none' -p 'CPUShares=1000' /usr/bin/kvm -id 100 -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=b9ae5f06-7f2c-400b-ae34-cc561196f85c' -name vdns1.hkdns.co.za -smp '1,sockets=1,cores=1,maxcpus=1' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000' -vga cirrus -vnc unix:/var/run/qemu-server/100.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 512 -k en-us -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:4262af5da7b6' -drive 'file=/var/lib/vz/images/100/vm-100-disk-1.qcow2,if=none,id=drive-ide0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0,bootindex=100' -drive 'if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -netdev 'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=36:61:34:39:64:32,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300'' failed: exit code
 
file system may not support O_DIRECT

Just set the cache mode to "writethrough" and it will work.
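For example, something like this should do it from the CLI (a sketch; the drive name and volume match the log above):

Code:
qm set 100 -ide0 local:100/vm-100-disk-1.qcow2,cache=writethrough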

But much better: use the ZFS storage plugin.
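For reference, a zfspool entry in /etc/pve/storage.cfg looks roughly like this (the storage name and options here are examples; blocksize controls the volblocksize new zvols get):

Code:
zfspool: local-zfs
        pool rpool
        content images
        sparse
        blocksize 8k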
 
I am using the ZFS storage plugin.

Nevertheless I changed to writethrough and that worked. Will convert all these VMs to RAW tomorrow, as I think that runs better on ZFS anyway.
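A minimal sketch of such a conversion with qemu-img (the paths are examples; on the zfspool storage the RAW disk ends up as a zvol rather than a file):

Code:
# convert the qcow2 image to raw; -p prints progress
qemu-img convert -p -f qcow2 -O raw \
    /var/lib/vz/images/100/vm-100-disk-1.qcow2 \
    /var/lib/vz/images/100/vm-100-disk-1.raw

The VM's drive line then needs to point at the new image; the GUI's "Move disk" action can do both steps in one go.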
 

This error seems to stem from the attached image.

It is there that the values are not 100G but, for example, 100303434034.

Any safe way to fix this? I think those occurred when disks were resized through the Proxmox GUI before.
 

Attachments

  • Untitled.png (6.8 KB)
Edited this file on the old server:

/etc/pve/nodes/vz-jhb-2/qemu-server/108.conf

and changed the weird values like:

ide0: local:108/vm-108-disk-1.qcow2,size=53476736K
ide1: local:108/vm-108-disk-2.qcow2,size=21474836992
ide2: local:108/vm-108-disk-3.qcow2,size=64424509952
ide3: local:108/vm-108-disk-4.qcow2,size=300G

to

ide0: local:108/vm-108-disk-1.qcow2,size=50G
ide1: local:108/vm-108-disk-2.qcow2,size=20G
ide2: local:108/vm-108-disk-3.qcow2,size=60G
ide3: local:108/vm-108-disk-4.qcow2,size=300G
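Before trusting hand-edited sizes, the exact virtual size of each image can be read back with qemu-img (path as the local storage stores it; the "virtual size" line gives the byte count to compare against the conf):

Code:
qemu-img info /var/lib/vz/images/108/vm-108-disk-1.qcow2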

Rerunning the backup now - I think it should work. Then I will restore on the new server.