Ubuntu VM won't start when cloned to LVM iSCSI shared storage, local storage OK

This is a Lenovo DS4200, which isn't that rare I guess,
but that doesn't mean it has no bugs. Since it isn't QA'ed with Proxmox, it's possible they are not compatible out of the box. This is why it's important to pick storage that the vendor actually guarantees to work with your particular application.

When one runs importdisk, the following command is executed:
/usr/bin/qemu-img convert -p -n -O raw /mnt/pve/bbnas/template/iso/lunar-server-cloudimg-amd64.img zeroinit:/dev/bbpve/blockbridge-nvme:vm-3011-disk-0
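
For context, that is the command Proxmox generates under the hood; the user-facing call looks roughly like this (VM ID taken from the log above, the storage name is an assumption):

qm importdisk 3011 /mnt/pve/bbnas/template/iso/lunar-server-cloudimg-amd64.img blockbridge-nvme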

Looking at the code, we can learn that:
a) there is a "zeroinit" prefix that is not a standard part of QEMU: https://github.com/search?q=org%3Aproxmox%20zeroinit&type=code
b) it is conditioned on the presence of the sparseinit storage feature: QemuServer/ImportDisk.pm: my $zeroinit = PVE::Storage::volume_has_feature($storecfg, 'sparseinit', $dst_volid);

We can further learn from the code that sparseinit is enabled by default in these cases:
Storage/Plugin.pm: sparseinit => {
...
base => { qcow2 => 1, raw => 1, vmdk => 1 },
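
If you want to see what your node actually reports for a given volume, the same helper can be queried directly (a sketch to run on a PVE node; the volume ID is just the one from the log above and assumes the disk exists):

perl -MPVE::Storage -e 'my $cfg = PVE::Storage::config(); print PVE::Storage::volume_has_feature($cfg, "sparseinit", $ARGV[0]) ? "sparseinit: yes\n" : "sparseinit: no\n";' "blockbridge-nvme:vm-3011-disk-0"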

You can try changing the setting on your LVM storage definition (I did not try it):
sparseinit 0
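
In /etc/pve/storage.cfg that suggestion would look roughly like this (untested, names are examples, and it is not certain the LVM plugin accepts this key at all):

lvm: shared-lvm
        vgname bbpve
        shared 1
        content images
        sparseinit 0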



Thank you very much for your efforts in this matter! I tried adding the sparseinit feature to LVMPlugin.pm; unfortunately, that had no effect.
What did make a difference was adding saferemove to the LVM definition in storage.cfg; now every disk deletion finishes by zeroing out the disk space.
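
For reference, the relevant storage.cfg stanza would look roughly like this (storage and VG names are examples; saferemove is a documented option of the LVM storage type):

lvm: shared-lvm
        vgname bbpve
        shared 1
        content images
        saferemove 1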

It seems like new LVM disks are created in the same place as the last deleted disk was (not sure about this). So far all newly cloned VMs work, and that must be because the space has been zeroed out in advance.
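
If you want to check that theory, you can compare the physical extents of the old and new LVs (a rough check; column output may vary by lvm2 version):

lvs -o lv_name,seg_pe_ranges bbpve

If the new disk's extent range matches the one the deleted disk used, the zeroed space is indeed being reused.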
 
For LVM, sparseinit is not set and thus zeroinit is not used. If manually zeroing out the blocks works, then that means the storage ignores the zeroing that qemu-img convert does. We've seen this in the past with other proprietary storage appliances; this is broken behaviour on their end.
 
You can probably verify this if you create a volume, then fill it with a pattern using dd, and then use qemu-img convert to transfer a sparse image onto it. If you see the "pattern" where there should be holes/zeroes, then you know it's the same bug.
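
A minimal sketch of that test, assuming a throwaway LV on the affected VG (all names and sizes are examples):

# create a 1 GiB scratch LV on the shared VG
lvcreate -L 1G -n zerotest bbpve

# fill it with a recognizable non-zero pattern (0xAA bytes)
tr '\0' '\252' < /dev/zero | dd of=/dev/bbpve/zerotest bs=1M count=1024 iflag=fullblock oflag=direct

# create a mostly-sparse source image and convert it onto the LV in place
qemu-img create -f qcow2 sparse-test.qcow2 1G
qemu-img convert -p -n -O raw sparse-test.qcow2 /dev/bbpve/zerotest

# inspect a region that should be a hole: expect zeroes, not 0xAA
dd if=/dev/bbpve/zerotest bs=1M count=1 skip=512 iflag=direct | hexdump -C | head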
 
Yes, that's where I was going in #17 - if this were my environment, I'd try to reproduce with straight LVM/dd and predictable patterns, then remove LVM and open a case with the vendor.
There may be an option on the storage side, perhaps hidden in the CLI, to disable zero optimization.
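
One quick initiator-side check, assuming the LUN shows up as /dev/sdX (device name is an example):

cat /sys/block/sdX/queue/write_zeroes_max_bytes

If this reports 0, the kernel emulates zeroing with plain writes; if it is non-zero, zero-out requests are offloaded to the appliance, which is the path that appears to be misbehaving here.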


 
FWIW, the broken one(s) in the past were not handling the following (issued by QEMU):

ioctl(blockdev_filedescriptor, BLKZEROOUT, offset, length)

Mentioning it in case it is helpful if you have a support contract with the vendor.
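
If it helps with the vendor case: blkdiscard from util-linux can exercise that same ioctl in isolation (device, offset, and length are examples; this destroys data in the given range):

# issue BLKZEROOUT over the first 16 MiB of the device
blkdiscard --zeroout --offset 0 --length 16M /dev/bbpve/zerotest

# the range must now read back as all zeroes
dd if=/dev/bbpve/zerotest bs=1M count=16 iflag=direct | hexdump -C | head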
 
