But except for the size mismatch, the images are identical, i.e.:

Code:
root@saori:~# qemu-img compare /dev/zvol/FileZilla/vm-4052-disk-0 /dev/zvol/FileZilla2/vm-9000-disk-0
Warning: Image size mismatch!
Images are identical.
man qemu-img says:

Code:
By default, images with different size are considered identical if the larger image contains only unallocated and/or zeroed sectors in the area after the end of the other image.
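That rule can be illustrated with plain coreutils: a copy padded with trailing zeros still matches the original over the shared region. A minimal sketch (the /tmp file names are made up for the demo; this is not qemu-img itself, which also has a strict mode, qemu-img compare -s, that fails on a size mismatch):

```shell
# Toy illustration: a file grown with trailing zeros contains the
# same data as the original in the shared region.
dd if=/dev/urandom of=/tmp/small.raw bs=1M count=1 2>/dev/null
cp /tmp/small.raw /tmp/large.raw
truncate -s 2M /tmp/large.raw        # grow by 1M of (sparse) zeros

# compare only the shared first 1M:
cmp -n $((1024*1024)) /tmp/small.raw /tmp/large.raw && echo "shared region identical"

# the tail of the larger file is all zero bytes:
tail -c +$((1024*1024+1)) /tmp/large.raw | tr -d '\0' | wc -c   # -> 0
```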
qm config shows:

Code:
scsi0: FileZilla2:vm-1003-disk-0,size=44544M
scsi0: /dev/zvol/FileZilla2/vm-1003-disk-0,cache=none,aio=native

The size parameter in the configuration is informational only and is used for all Proxmox VE managed volumes.
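If that informational size ever drifts from the real volume size, Proxmox VE can refresh it; a hedged sketch (the VM ID is just this thread's example, run on the PVE host):

```shell
# Re-scan the storages and update the size= entries in the VM config.
qm rescan --vmid 1003
```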
Are you sure this is the right syntax? AFAIK the "size=44544M" parameter is only valid for image files, but you are using a block device directly.
It's better to use a Proxmox VE managed volume rather than pass-through; e.g. you won't be able to live-migrate like this.

Possible:

Code:
scsi0: /dev/zvol/FileZilla2/vm-1003-disk-0,cache=none,aio=native
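One way to move from pass-through to a managed volume is to import the existing zvol into a PVE storage; a hedged sketch reusing this thread's IDs (whether importing straight from a block device works depends on the setup; the source just needs to be readable by qemu-img):

```shell
# Import the existing zvol contents as a new managed disk for VM 1003
# on the storage named FileZilla2; the result shows up as an
# "unusedN" entry in the config and can then be attached as scsi0.
qm importdisk 1003 /dev/zvol/FileZilla2/vm-1003-disk-0 FileZilla2
```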
Code:
root@pve8a1 ~ # qm config 117 | grep vm-117-disk-1
scsi2: zfs:vm-117-disk-1,discard=on,iothread=1,size=1G
root@pve8a1 ~ # qm showcmd 117 --pretty | grep vm-117-disk-1
-drive 'file=/dev/zvol/zfs/vm-117-disk-1,if=none,id=drive-scsi2,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
vm not working. no bootable device

Hi, please share the output of qm config <ID>, replacing <ID> with the actual ID of your VM, and the output of pveversion -v. Make sure the boot device is selected in the Boot Order setting in the Options tab for the VM.

Code:
root@pve8a1 ~ # qm config 117 | grep vm-117-disk-1
scsi2: zfs:vm-117-disk-1,discard=on,iothread=1,size=1G
root@pve8a1 ~ # qm showcmd 117 --pretty | grep vm-117-disk-1
-drive 'file=/dev/zvol/zfs/vm-117-disk-1,if=none,id=drive-scsi2,discard=on,format=raw,cache=none,aio=io_uring,detect-zeroes=unmap' \
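The boot order can also be set from the CLI instead of the Options tab; a hedged sketch (VM ID 117 and the scsi0 device are examples, adjust to the actual config):

```shell
# Put the first SCSI disk at the front of the boot order.
qm set 117 --boot order=scsi0
```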
AFAIK these parameters:

Code:
discard=on
aio=io_uring

should not be used on a ZFS volume; they cause data corruption.

References? I have never heard of that. If it were a general issue, a lot of people would complain. Can you share any details?
Which PVE version uses these settings by default?

discard=on is not a default; this is just an example I posted.

I have the same problem. All my newly installed VMs can't boot, showing "not a bootable disk" and "no bootable device".
Hi, please share the VM configuration:

Code:
qm config <ID>

replacing <ID> with the ID of your VM, and the output of pveversion -v, as well as the output of the following (if it's qcow2, different commands are needed!):

Code:
fdisk -l /path/to/your/device
lsblk -o NAME,FSTYPE /path/to/your/device
wipefs /path/to/your/device

Use

Code:
pvesm path storage:vm-XYZ-disk-N

with your actual volume ID to get the path.

Still having the issue, so I finally decided to reset everything and go for a fresh install.
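Those steps can be chained on the PVE host; a hedged sketch (the storage name local-zfs and VM ID 100 are made-up examples, not from this thread):

```shell
# Resolve the volume ID to a device/file path, then inspect it.
DEV=$(pvesm path local-zfs:vm-100-disk-0)   # hypothetical volume ID
fdisk -l "$DEV"
lsblk -o NAME,FSTYPE "$DEV"
wipefs "$DEV"   # without -a, wipefs only lists signatures, it erases nothing
```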
The backups on the fresh install are all failing the same way. Also, creating a new cloud-init VM on the new system fails in the same way. I found that creating a VM from a .img was failing, but if I rename the .img to .qcow2, the same process works? Any idea?
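The rename mattering suggests the format is being guessed from the file extension, while a real qcow2 file is identified by content: it starts with the magic bytes "QFI\xfb". A toy detector, assuming GNU coreutils (the /tmp file names are made up, and the files are just headers, not valid images):

```shell
# detect_format: hypothetical helper that guesses qcow2 vs raw from the
# first four bytes (qcow2 magic is 51 46 49 fb, i.e. "QFI\xfb").
detect_format() {
    magic=$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')
    if [ "$magic" = "514649fb" ]; then echo qcow2; else echo raw; fi
}

printf 'QFI\373stub' > /tmp/demo.qcow2   # fake qcow2 magic only
printf 'not-qcow2'   > /tmp/demo.img
detect_format /tmp/demo.qcow2   # -> qcow2
detect_format /tmp/demo.img     # -> raw
```

A raw file renamed to .qcow2 would still be detected as raw by content, which is why checking with qemu-img info is more reliable than trusting the extension.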
Code:
discard=off
aio=native
cache=none
scsihw=virtio-scsi-pci
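For reference, such settings can be applied from the CLI; a hedged sketch (VM ID 100 and the volume name are made-up examples; note that in qm's schema discard takes "on" or "ignore", so "discard=off" above corresponds to simply not enabling it):

```shell
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 local-zfs:vm-100-disk-0,aio=native,cache=none
```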
I have solved the problem. I used importdisk in the terminal instead of using the web UI "Create VM" directly, although I don't understand what the difference between them is, or what happened.

Here it can be seen that the device link actually points to the partition:

Code:
root@saori:~# lsblk -o NAME,FSTYPE /dev/zvol/FileZilla2/vm-9000-disk-0
NAME     FSTYPE
zd400p16 ext4
I.e. zd400p16 rather than zd400.
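To check which node the zvol link actually resolves to, the symlinks can be inspected directly; a sketch using this thread's paths (they only exist on that host, and the partition-link naming may differ by setup):

```shell
# Resolve the link and list its siblings; ZFS also creates
# /dev/zvol/<pool>/<vol>-part<N> links for partitions on a zvol.
readlink -f /dev/zvol/FileZilla2/vm-9000-disk-0
ls -l /dev/zvol/FileZilla2/
```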