Cloning a template creates wrong volume

burgosz

Hi,

We are using iSCSI storage.

After a full clone of the template, the disk of the new VM contains an invalid partition table.

Code:
root@l-rs4:~# fdisk -l /dev/mapper/istore-vm--10000--disk--0
Disk /dev/mapper/istore-vm--10000--disk--0: 2.2 GiB, 2361393152 bytes, 4612096 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disklabel type: gpt
Disk identifier: BA057571-25C1-47A1-A98A-DBF197729E90

Device                                        Start     End Sectors  Size Type
/dev/mapper/istore-vm--10000--disk--0-part1  227328 4612062 4384735  2.1G Linux filesystem
/dev/mapper/istore-vm--10000--disk--0-part14   2048   10239    8192    4M BIOS boot
/dev/mapper/istore-vm--10000--disk--0-part15  10240  227327  217088  106M EFI System

Partition table entries are not in disk order.

root@l-rs4:~# fdisk -l /dev/mapper/istore-vm--220--disk--0
Disk /dev/mapper/istore-vm--220--disk--0: 2.2 GiB, 2361393152 bytes, 4612096 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device                                    Boot Start     End Sectors  Size Id Type
/dev/mapper/istore-vm--220--disk--0-part1          1 4612095 4612095  2.2G ee GPT

Partition 1 does not start on physical sector boundary.

VM 10000 is the template; 220 is the VM created by cloning. I'm cloning to the same storage on the same node.

The template was created following this tutorial: https://pve.proxmox.com/wiki/Cloud-Init_Support. I've tried with both Ubuntu 22.04 and Rocky Linux 9.2 cloud images, and I experience the same problem with both.
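For reference, the creation steps were roughly the following (reconstructed from the wiki, so the exact flags may have differed slightly):

Code:
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
qm create 10000 --name ubuntu22.04-cloud-init --memory 2048 --net0 virtio,bridge=vmbr0,tag=23
qm set 10000 --scsihw virtio-scsi-pci --scsi0 l-store-s:0,import-from=/root/jammy-server-cloudimg-amd64.img
qm set 10000 --scsi1 l-store-s:cloudinit --boot order=scsi0
qm set 10000 --serial0 socket --vga serial0
qm template 10000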

Do you have any advice on what could be wrong?

Thanks, Szabolcs
 
Can you provide more details?

Define exactly what "iSCSI storage" you are using.
Output of "cat /etc/pve/storage.cfg"
Output of "qm config 10000"
How are the disks for the target VM provisioned?
Full log of "qm clone 10000 220 --full 1"
Output of "journalctl --since '10 minutes ago'" (if the clone ran within the last 10 minutes)
Output of "qm config 220" (or whatever new VM ID you create for the purposes of this post)
"pveversion -v"

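A sketch that collects most of this into a single file (the VM IDs are the ones from this thread; adjust as needed):

Code:
{
  echo "=== storage.cfg ===";     cat /etc/pve/storage.cfg
  echo "=== template config ==="; qm config 10000
  echo "=== clone config ===";    qm config 220
  echo "=== versions ===";        pveversion -v
  echo "=== journal ===";         journalctl --since '10 minutes ago'
} > /tmp/clone-debug.txt 2>&1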

 
Hi,

Thanks for the clarification request. Here are some details.

It is an HP MSA 2050 SAN. The iSCSI target is managed by open-iscsi directly on the host, not from PVE.

Some relevant configuration for this storage:

Code:
multipath -ll:

l-store-s (3600c0ff000524808dd10ff5f01000000) dm-7 HPE,MSA 2050 SAN
size=35T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 15:0:0:0 sdd     8:48  active ready running
|-+- policy='round-robin 0' prio=50 status=enabled
| `- 17:0:0:0 sdc     8:32  active ready running
|-+- policy='round-robin 0' prio=10 status=enabled
| `- 14:0:0:0 sdb     8:16  active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  `- 16:0:0:0 sda     8:0   active ready running
 
pvs:

PV                         VG        Fmt  Attr PSize   PFree
/dev/mapper/l-store-s      istore    lvm2 a--   34.65t  8.15t

cat /etc/pve/storage.cfg:

lvm: l-store-s
    vgname istore
    content rootdir,images
    nodes l-rs3,l-rs7,l-rs4,l-rs6,l-rs5,shedir
    shared 1
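For completeness, the sector sizes that the multipath device reports can be checked like this (a sketch, using blockdev(8)):

Code:
blockdev --getss --getpbsz --getiomin --getioopt /dev/mapper/l-store-s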

QM Config of the template:

Code:
boot: order=scsi0
ipconfig0: ip=dhcp
memory: 2048
meta: creation-qemu=7.2.0,ctime=1686244088
name: ubuntu22.04-cloud-init
net0: virtio=DE:F7:35:94:5B:E6,bridge=vmbr0,tag=23
onboot: 1
scsi0: l-store-s:vm-10000-disk-0,size=2252M
scsi1: l-store-s:vm-10000-cloudinit,media=cdrom
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=90768b46-9e4d-4df7-aeac-2dc0006612d2
template: 1
vga: serial0
vmgenid: b85e95ee-e674-4851-93e9-f5591f62f02d

The disk was created with:
qm set 10000 --scsi0 l-store-s:0,import-from=/root/jammy-server-cloudimg-amd64.img
The .img is a freshly downloaded cloud image from Ubuntu (https://cloud-images.ubuntu.com/jammy/current/).
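One way to verify the import itself is intact would be qemu-img compare (a sketch; note the Ubuntu .img is a qcow2 file despite its extension):

Code:
qemu-img compare -f qcow2 -F raw /root/jammy-server-cloudimg-amd64.img /dev/istore/vm-10000-disk-0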

Output of qm clone 10000 220 --full 1:
Code:
create full clone of drive scsi0 (l-store-s:vm-10000-disk-0)
  Wiping PMBR signature on /dev/istore/vm-220-disk-0.
  Logical volume "vm-220-disk-0" created.
transferred 0.0 B of 2.2 GiB (0.00%)
transferred 24.1 MiB of 2.2 GiB (1.07%)
[...]
transferred 2.2 GiB of 2.2 GiB (100.00%)
create full clone of drive scsi1 (l-store-s:vm-10000-cloudinit)
  Logical volume "vm-220-cloudinit" created.

Output of qm config 220:
Code:
boot: order=scsi0
ipconfig0: ip=dhcp
memory: 2048
meta: creation-qemu=7.2.0,ctime=1686244088
name: Copy-of-VM-ubuntu22.04-cloud-init
net0: virtio=82:27:69:2C:A6:D4,bridge=vmbr0,tag=23
onboot: 1
scsi0: l-store-s:vm-220-disk-0,size=2252M
scsi1: l-store-s:vm-220-cloudinit,media=cdrom,size=4M
scsihw: virtio-scsi-pci
serial0: socket
smbios1: uuid=1796e55b-d21c-4134-becb-bad6b6457981
vga: serial0
vmgenid: 867d9aae-636b-4257-a95e-78dcfdff8dbc

Journalctl output:

Code:
Jun 08 20:40:51 l-rs4 qm[39176]: <root@pam> starting task UPID:l-rs4:0000990A:648C7E77:648220B3:qmclone:10000:root@pam:
Jun 08 20:40:52 l-rs4 multipath[39223]: l-store-s: adding new path sdd
Jun 08 20:40:52 l-rs4 multipath[39223]: l-store-s: adding new path sdc
Jun 08 20:40:52 l-rs4 multipath[39223]: l-store-s: adding new path sdb
Jun 08 20:40:52 l-rs4 multipath[39223]: l-store-s: adding new path sda
Jun 08 20:40:56 l-rs4 pveproxy[33509]: Clearing outdated entries from certificate cache
Jun 08 20:41:00 l-rs4 multipath[39583]: l-store-s: adding new path sdd
Jun 08 20:41:00 l-rs4 multipath[39583]: l-store-s: adding new path sdc
Jun 08 20:41:00 l-rs4 multipath[39583]: l-store-s: adding new path sdb
Jun 08 20:41:00 l-rs4 multipath[39583]: l-store-s: adding new path sda
Jun 08 20:41:00 l-rs4 qm[39176]: <root@pam> end task UPID:l-rs4:0000990A:648C7E77:648220B3:qmclone:10000:root@pam: OK
Jun 08 20:44:35 l-rs4 pmxcfs[3132845]: [dcdb] notice: data verification successful

pveversion -v:

Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.74-1-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)
pve-kernel-5.15: 7.4-1
pve-kernel-5.13: 7.1-9
pve-kernel-5.15.104-1-pve: 5.15.104-2
pve-kernel-5.15.83-1-pve: 5.15.83-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
pve-kernel-5.13.19-6-pve: 5.13.19-15
pve-kernel-5.4.162-1-pve: 5.4.162-2
ceph-fuse: 14.2.21-1
corosync: 3.1.7-pve1
criu: 3.15-1+pve-1
glusterfs-client: 9.2-1
ifupdown: residual config
ifupdown2: 3.1.0-1+pmx3
libjs-extjs: 7.0.0-1
libknet1: 1.24-pve2
libproxmox-acme-perl: 1.4.4
libproxmox-backup-qemu0: 1.3.1-1
libproxmox-rs-perl: 0.2.1
libpve-access-control: 7.4-2
libpve-apiclient-perl: 3.2-1
libpve-common-perl: 7.3-4
libpve-guest-common-perl: 4.2-4
libpve-http-server-perl: 4.2-3
libpve-rs-perl: 0.7.5
libpve-storage-perl: 7.4-2
libspice-server1: 0.14.3-2.1
lvm2: 2.03.11-2.1
lxc-pve: 5.0.2-2
lxcfs: 5.0.3-pve1
novnc-pve: 1.4.0-1
openvswitch-switch: 2.15.0+ds1-2+deb11u1
proxmox-backup-client: 2.4.1-1
proxmox-backup-file-restore: 2.4.1-1
proxmox-kernel-helper: 7.4-1
proxmox-mail-forward: 0.1.1-1
proxmox-mini-journalreader: 1.3-1
proxmox-offline-mirror-helper: 0.5.1-1
proxmox-widget-toolkit: 3.6.5
pve-cluster: 7.3-3
pve-container: 4.4-3
pve-docs: 7.4-2
pve-edk2-firmware: 3.20230228-2
pve-firewall: 4.3-1
pve-firmware: 3.6-4
pve-ha-manager: 3.6.0
pve-i18n: 2.12-1
pve-qemu-kvm: 7.2.0-8
pve-xtermjs: 4.16.0-1
qemu-server: 7.4-3
smartmontools: 7.2-pve3
spiceterm: 3.2-2
swtpm: 0.8.0~bpo11+3
vncterm: 1.7-1
zfsutils-linux: 2.1.9-pve1

Thanks.
 
OK, so the iSCSI portion is irrelevant, unless the underlying storage is caching in some weird way.

The procedure you described works absolutely fine for me. I would recommend that you try it again, making sure you have cleaned up/deleted all the prior attempts. Take a look at the system state (including fdisk output) at various points. Perhaps even reboot the system to clear any invalid state.

It's also interesting that the partitions you listed from your source disk appear not to be from a standard Ubuntu cloud image. You did imply that it was a custom image; make sure you give a standard image a try.
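To clean up prior attempts, something like this should do (a sketch, with the IDs from your post):

Code:
qm destroy 220 --purge           # remove the failed clone and its disks
lvs istore                       # check for leftover vm-220-* volumes
# lvremove istore/vm-220-disk-0  # only if something is still left over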

Create a new disk on Blockbridge storage:
Code:
root@pve7demo1:~# bb vss provision -c 1TiB --label lvmtest --with-disk
== Created vss: lvmtest (VSS1862194C40601733)

== VSS: lvmtest (VSS1862194C40601733)
label                 lvmtest
serial                VSS1862194C40601733
uuid                  1eda5589-129a-48a2-b7b9-f8db24e3c2be
created               2023-06-08 15:00:44 -0400
status                online
current time          2023-06-08T19:00+00:00

Attach disk to PVE host:
Code:
root@pve7demo1:~# bb host attach -d lvmtest/disk-1
=============================================================
lvmtest/disk-1 attached (read-write) to pve7demo1 as /dev/sdc
=============================================================

Create a partition:
Code:
root@pve7demo1:~# fdisk /dev/sdc

Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x15c84e94.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-2147483647, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-2147483647, default 2147483647):

Created a new partition 1 of type 'Linux' and of size 1024 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Create the LVM volume group:
Code:
root@pve7demo1:~# pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created.
root@pve7demo1:~# vgcreate istore /dev/sdc1
  Volume group "istore" successfully created

Update the PVE storage config file and restart the services:
Code:
root@pve7demo1:~# vi /etc/pve/storage.cfg
root@pve7demo1:~#  systemctl try-reload-or-restart pvedaemon pveproxy pvestatd pvescheduler
root@pve7demo1:~# pvesm status
Name             Type     Status           Total            Used       Available        %
directlvm         lvm     active      1073737728               0      1073737728    0.00%

Create a VM, add the imported disk, then clone the VM:
Code:
root@pve7demo1:~# qm create 3001
root@pve7demo1:~# qm set 3001 --scsi0 directlvm:0,import-from=/mnt/pve/bbnas/template/iso/jammy-server-cloudimg-amd64-disk-kvm.qcow2
update VM 3001: -scsi0 directlvm:0,import-from=/mnt/pve/bbnas/template/iso/jammy-server-cloudimg-amd64-disk-kvm.qcow2
  Logical volume "vm-3001-disk-0" created.
transferred 0.0 B of 2.2 GiB (0.00%)

root@pve7demo1:~# qm clone 3001 3002 --full 1
create full clone of drive scsi0 (directlvm:vm-3001-disk-0)
  Logical volume "vm-3002-disk-0" created.
transferred 0.0 B of 2.2 GiB (0.00%)

The partition tables are identical:
Code:
root@pve7demo1:~# fdisk -l /dev/mapper/istore-vm--300
/dev/mapper/istore-vm--3001--disk--0  /dev/mapper/istore-vm--3002--disk--0 
root@pve7demo1:~# fdisk -l /dev/mapper/istore-vm--3001--disk--0
Disk /dev/mapper/istore-vm--3001--disk--0: 2.2 GiB, 2361393152 bytes, 4612096 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: 1DEF787F-62F5-471E-9161-C0BE43F5FB91

Device                                       Start     End Sectors  Size Type
/dev/mapper/istore-vm--3001--disk--0-part1  227328 4612062 4384735  2.1G Linux filesystem
/dev/mapper/istore-vm--3001--disk--0-part14   2048   10239    8192    4M BIOS boot
/dev/mapper/istore-vm--3001--disk--0-part15  10240  227327  217088  106M EFI System

Partition table entries are not in disk order.

root@pve7demo1:~# fdisk -l /dev/mapper/istore-vm--3002--disk--0
Disk /dev/mapper/istore-vm--3002--disk--0: 2.2 GiB, 2361393152 bytes, 4612096 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: 1DEF787F-62F5-471E-9161-C0BE43F5FB91

Device                                       Start     End Sectors  Size Type
/dev/mapper/istore-vm--3002--disk--0-part1  227328 4612062 4384735  2.1G Linux filesystem
/dev/mapper/istore-vm--3002--disk--0-part14   2048   10239    8192    4M BIOS boot
/dev/mapper/istore-vm--3002--disk--0-part15  10240  227327  217088  106M EFI System

Partition table entries are not in disk order.


 
Sorry for the wrong partition layout. Between my initial post and the second one I tried other images as well; in the initial post it was a Rocky cloud image. Now the template's layout looks like yours (it's the official Ubuntu cloud image without any modification), but the clone is still broken. I've updated the initial post to avoid confusion.

I've tried with completely different random IDs, so the LV names are not reused, and I've also tried from different nodes in the cluster (but on the same storage); I experience the same problem every time.
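One way to check whether the on-disk bytes of the template and the clone actually differ, or only the kernel's view of them does, is to hash the GPT region directly (a sketch):

Code:
# protective MBR + GPT header + partition entry array = first 34 sectors
dd if=/dev/istore/vm-10000-disk-0 bs=512 count=34 2>/dev/null | sha256sum
dd if=/dev/istore/vm-220-disk-0   bs=512 count=34 2>/dev/null | sha256sum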
 
Does the layout persist through a reboot?
Can you try with local-lvm storage?
The underlying clone command is: /usr/bin/qemu-img convert -p -n -f raw -O raw /dev/istore/vm-3003-disk-0 /dev/istore/vm-3004-disk-0
You can try running it manually.
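For example (a sketch; the scratch LV name is made up here, and the size matches your template disk):

Code:
lvcreate -L 2252M -n vm-9999-disk-0 istore
/usr/bin/qemu-img convert -p -n -f raw -O raw /dev/istore/vm-10000-disk-0 /dev/istore/vm-9999-disk-0
fdisk -l /dev/istore/vm-9999-disk-0   # did the GPT survive?
lvremove -y istore/vm-9999-disk-0     # clean up afterwards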

Also, your packages seem to be a little out of date. My fully updated system is on the right, yours on the left. I don't know if that affects you, but it definitely makes sense to start from a clean state.

Code:
proxmox-ve: 7.4-1 (running kernel: 5.15.74-1-pve)             | proxmox-ve: 7.4-1 (running kernel: 5.15.107-2-pve)
pve-manager: 7.4-3 (running version: 7.4-3/9002ab8a)          | pve-manager: 7.4-4 (running version: 7.4-4/4a8501a8)
pve-kernel-5.15: 7.4-1                                        | pve-kernel-5.15: 7.4-3
                                                              > pve-kernel-5.11: 7.0-10
                                                              > pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.83-1-pve: 5.15.83-1                           <
pve-kernel-5.15.74-1-pve: 5.15.74-1                           <
pve-kernel-5.4.162-1-pve: 5.4.162-2                           | pve-kernel-5.13.19-2-pve: 5.13.19-4
ceph-fuse: 14.2.21-1                                          | pve-kernel-5.11.22-7-pve: 5.11.22-12
                                                              > pve-kernel-5.11.22-1-pve: 5.11.22-2
                                                              > ceph: 17.2.6-pve1
                                                              > ceph-fuse: 17.2.6-pve1
ifupdown: residual config                                     | ifupdown2: 3.1.0-1+pmx4
ifupdown2: 3.1.0-1+pmx3                                       | ksm-control-daemon: 1.4-1
libpve-access-control: 7.4-2                                  | libpve-access-control: 7.4-3
libpve-common-perl: 7.3-4                                     | libpve-common-perl: 7.4-1
libpve-rs-perl: 0.7.5                                         | libpve-rs-perl: 0.7.6
libpve-storage-perl: 7.4-2                                    | libpve-storage-perl: 7.4-3
openvswitch-switch: 2.15.0+ds1-2+deb11u1                      | proxmox-backup-client: 2.4.2-1
proxmox-backup-client: 2.4.1-1                                | proxmox-backup-file-restore: 2.4.2-1
proxmox-backup-file-restore: 2.4.1-1                          <
proxmox-widget-toolkit: 3.6.5                                 | proxmox-widget-toolkit: 3.7.0
pve-container: 4.4-3                                          | pve-container: 4.4-4
pve-firewall: 4.3-1                                           | pve-firewall: 4.3-2
pve-firmware: 3.6-4                                           | pve-firmware: 3.6-5
pve-ha-manager: 3.6.0                                         | pve-ha-manager: 3.6.1
pve-xtermjs: 4.16.0-1                                         | pve-xtermjs: 4.16.0-2
zfsutils-linux: 2.1.9-pve1                                    | zfsutils-linux: 2.1.11-pve1



 
I've upgraded, but it didn't help.

I've played with qemu-img convert: it works when converting to a local file, and to local LVM as well.

But I've found an interesting bug and a hotfix for my problem. If I create an LV a little bit bigger than the original image, the convert works and keeps the partitions:
Code:
root@l-rs4:~# fdisk -l /dev/mapper/istore-vm--10000--disk--0
GPT PMBR size mismatch (4612095 != 6709247) will be corrected by write.
The backup GPT table is not on the end of the device.
Disk /dev/mapper/istore-vm--10000--disk--0: 3.2 GiB, 3435134976 bytes, 6709248 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disklabel type: gpt
Disk identifier: BA057571-25C1-47A1-A98A-DBF197729E90

Device                                        Start     End Sectors  Size Type
/dev/mapper/istore-vm--10000--disk--0-part1  227328 4612062 4384735  2.1G Linux filesystem
/dev/mapper/istore-vm--10000--disk--0-part14   2048   10239    8192    4M BIOS boot
/dev/mapper/istore-vm--10000--disk--0-part15  10240  227327  217088  106M EFI System

Partition table entries are not in disk order.

root@l-rs4:~# fdisk -l /dev/mapper/istore-vm--220--disk--0
GPT PMBR size mismatch (4612095 != 6709247) will be corrected by write.
The backup GPT table is not on the end of the device.
Disk /dev/mapper/istore-vm--220--disk--0: 3.2 GiB, 3435134976 bytes, 6709248 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 1048576 bytes
Disklabel type: gpt
Disk identifier: BA057571-25C1-47A1-A98A-DBF197729E90

Device                                        Start     End Sectors  Size Type
/dev/mapper/istore-vm--220--disk--0-part1  227328 4612062 4384735  2.1G Linux filesystem
/dev/mapper/istore-vm--220--disk--0-part14   2048   10239    8192    4M BIOS boot
/dev/mapper/istore-vm--220--disk--0-part15  10240  227327  217088  106M EFI System

Partition table entries are not in disk order.
So after importing the template, I resized the volume; this fixed the cloning.
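In other words, the workaround boils down to this (a sketch; the amount to grow by is arbitrary):

Code:
lvresize -L +1G /dev/istore/vm-10000-disk-0   # grow the template LV past the image size
qm rescan --vmid 10000                        # let PVE pick up the new size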

Do you have any clue what could cause this? Is there maybe a problem with my storage?
 
It could be disk block size emulation across the storage layers you employ, i.e. from how you built your RAID group on the MSA, to the LUN presentation, to kernel/LVM.

https://superuser.com/questions/1352065/gpt-pmbr-size-mismatch-will-be-corrected-by-write

Code:
root@l-rs4:~# fdisk -l /dev/mapper/istore-vm--10000--disk--0
GPT PMBR size mismatch (4612095 != 6709247) will be corrected by write.
The backup GPT table is not on the end of the device.
Disk /dev/mapper/istore-vm--10000--disk--0: 3.2 GiB, 3435134976 bytes, 6709248 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
Compare that to my system:
Code:
fdisk -l /dev/mapper/istore-vm--3003--disk--0
Disk /dev/mapper/istore-vm--3003--disk--0: 2.2 GiB, 2361393152 bytes, 4612096 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes

Based on what you found with the volume size, it does seem to point to some sort of sector/size mismatch. This is an MSA/LVM/Linux interaction rather than anything Proxmox-specific, so I would search for the former.
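To chase it layer by layer, something like this shows the block sizes each layer reports (a sketch; the paths are the ones from this thread), and sgdisk can move the backup GPT back to the end of an enlarged volume, as typically suggested for that warning:

Code:
blockdev --getss --getpbsz /dev/sda                     # one raw iSCSI path
blockdev --getss --getpbsz /dev/mapper/l-store-s        # the multipath device
blockdev --getss --getpbsz /dev/istore/vm-220-disk-0    # the LVM volume
sgdisk -e /dev/istore/vm-220-disk-0                     # relocate the backup GPT to the device end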

