[SOLVED] Cloning a running VM makes the clone corrupted

jan.svoboda

Member
Jun 11, 2020
Hi,

when I clone a running VM, the clone has corrupted files.
This happens even when I clone a specific snapshot of the running VM (not its current state).

The cloned VM is Linux. The QEMU guest agent is enabled and running in it. The disks use the qcow2 format with cache set to "none", and the disk controller is VirtIO SCSI.
The VM storage sits on a ZFS filesystem and is exported via GlusterFS to the Proxmox nodes, which form a cluster.
The Proxmox version is 6.2.

The corrupted files contain only zero bytes. The VM then behaves strangely, depending on which files are corrupted (e.g. binaries).
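
For completeness, this is roughly how I find the damaged files inside the clone (a quick sketch; the scanned paths are just examples):
Code:
# Sketch: list regular files that consist entirely of zero bytes.
# The scanned paths are examples - adjust to where the damage shows up.
find /usr/bin /usr/sbin /etc -type f -size +0c -print0 |
while IFS= read -r -d '' f; do
    # cmp -n limits the comparison to the file's size; /dev/zero supplies the zeros
    if cmp -s -n "$(stat -c %s "$f")" "$f" /dev/zero; then
        printf 'all-zero: %s\n' "$f"
    fi
done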

The log from the cloning is also weird - the cloning continues for some time after the disks are fully copied:
Code:
transferred: 41154510 bytes remaining: 788530 bytes total: 41943040 bytes progression: 98.12 %
transferred: 41578135 bytes remaining: 364905 bytes total: 41943040 bytes progression: 99.13 %
transferred: 41943040 bytes remaining: 0 bytes total: 41943040 bytes progression: 100.00 %
transferred: 41943040 bytes remaining: 0 bytes total: 41943040 bytes progression: 100.00 %
transferred: 41943040 bytes remaining: 0 bytes total: 41943040 bytes progression: 100.00 %
transferred: 41943040 bytes remaining: 0 bytes total: 41943040 bytes progression: 100.00 %
transferred: 41943040 bytes remaining: 0 bytes total: 41943040 bytes progression: 100.00 %

Should cloning a running VM work correctly on such a storage setup?
Thank you.
 
Hi,
could you please share your /etc/pve/storage.cfg and the VM configuration (qm config <ID>)?
 
/etc/pve/storage.cfg
Code:
dir: local
    path /var/lib/vz
    content backup,iso,vztmpl
    maxfiles 1
    shared 0

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir

glusterfs: Kamzik4
    path /mnt/pve/Kamzik4
    volume Kamzik4
    content images,iso
    server 192.168.55.97

glusterfs: Kamzik5
    path /mnt/pve/Kamzik5
    volume Kamzik5
    content images,iso
    server 192.168.55.97

qm config 119
Code:
root@devel5:~# qm config 119
agent: 1
bootdisk: scsi0
cores: 4
ide2: none,media=cdrom
machine: q35
memory: 4096
name: test1
net0: virtio=42:EE:9A:0C:C5:03,bridge=vmbr1,firewall=1,tag=50
net1: virtio=CE:98:EC:5F:34:A5,bridge=vmbr1,firewall=1
net2: virtio=62:C7:79:79:69:EA,bridge=vmbr2,firewall=1
net3: virtio=AA:A0:12:7D:DF:BF,bridge=vmbr2,firewall=1
net4: virtio=BA:FB:F2:A0:1F:FB,bridge=vmbr2,firewall=1
net5: virtio=4E:22:A8:E5:1E:BA,bridge=vmbr2,firewall=1
numa: 0
ostype: l26
scsi0: Kamzik5:119/vm-119-disk-1.qcow2,size=27G
scsi1: Kamzik5:119/vm-119-disk-0.qcow2,size=40G
scsihw: virtio-scsi-pci
smbios1: uuid=b04a7ec9-ebd3-4e19-93ff-3762b4ccba6e
sockets: 1
vmgenid: da473abc-b273-44bb-8693-fb138a66c484
 
Do you clone from one GlusterFS storage to the other? QEMU's GlusterFS integration has known problems in that case. Although the bug report I linked is about qemu-img, and live cloning uses a different mechanism under the hood (NBD drive mirroring), it might very well have the same root cause.
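
To rule out the transfer itself, you could compare the source and clone images directly once both VMs are shut down. A minimal sketch, assuming the default directory layout of your glusterfs mounts and an example clone VMID of 120:
Code:
# Compare the guest-visible content of source and clone (both VMs stopped).
# VMID 120 and the exact file names are examples - check them with 'qm config'.
qemu-img compare -f qcow2 -F qcow2 \
    /mnt/pve/Kamzik5/images/119/vm-119-disk-0.qcow2 \
    /mnt/pve/Kamzik5/images/120/vm-120-disk-0.qcow2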
 
Yes, I do clone from one GlusterFS storage to the other.

Hopefully, cloning on the same storage where the template resides will be enough until qemu-img is fixed.
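
In case it is useful, this is the kind of command I mean (the new VMID 120 and the name are placeholders):
Code:
# Full clone onto the same storage the source lives on (Kamzik5),
# instead of crossing gluster volumes.
qm clone 119 120 --full --storage Kamzik5 --name test1-clone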
Thank you.
 
Hmm, unfortunately clones are sometimes corrupted even when they are created on the same GlusterFS storage as the template - I just tried it.
 
With "template", do you mean the running VM your clones are based on? I'm asking, because in PVE you can convert to template, but then that machine cannot be started anymore (and its volumes get flagged as immutable if the storage supports it).

Does cloning a stopped VM on the same glusterfs storage work at least? Or if that's what you already tested, does cloning a running VM on the same glusterfs storage work?

Could you also post your glusterfs-server version and the output of pveversion -v?
 
I clone from both: templates (those which cannot be started) and regular VMs.
Cloning a stopped VM on the same GlusterFS storage usually works, but in the last week one clone was corrupted.
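
To catch the intermittent case, I run roughly this test loop (a sketch - the VMIDs and disk path are illustrative, and the disk numbering in the clone may differ):
Code:
# Rough repro loop: full-clone the stopped VM five times on the same storage
# and flag any clone whose first disk differs from the source.
# VMIDs 901-905 are free IDs picked for the test.
SRC=/mnt/pve/Kamzik5/images/119/vm-119-disk-0.qcow2
for i in $(seq 1 5); do
    id=$((900 + i))
    qm clone 119 "$id" --full --storage Kamzik5
    qemu-img compare "$SRC" \
        "/mnt/pve/Kamzik5/images/$id/vm-$id-disk-0.qcow2" \
        || echo "clone $id differs from source"
done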

The GlusterFS version on the Proxmox nodes (client) is 8.1; on the GlusterFS node (server) it is 7.4 (CentOS 7).
The ZFS version on the storage server is 0.8.3, on the Proxmox nodes 0.8.4.
Maybe the issue is caused by the mismatched GlusterFS or ZFS versions.
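
For reference, I read the versions with these commands (the server-side one runs on the CentOS node):
Code:
glusterfs --version    # GlusterFS client version, on the Proxmox nodes
gluster --version      # GlusterFS server version, on the storage node
zfs version            # ZFS userland and kernel module versions (0.8+)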

pveversion -v
Code:
root@devel1:~# pveversion -v
proxmox-ve: 6.2-2 (running kernel: 5.4.65-1-pve)
pve-manager: 6.2-12 (running version: 6.2-12/b287dd27)
pve-kernel-5.4: 6.2-7
pve-kernel-helper: 6.2-7
pve-kernel-5.3: 6.1-6
pve-kernel-5.4.65-1-pve: 5.4.65-1
pve-kernel-5.4.44-2-pve: 5.4.44-2
pve-kernel-5.4.44-1-pve: 5.4.44-1
pve-kernel-5.3.18-3-pve: 5.3.18-3
pve-kernel-5.3.18-2-pve: 5.3.18-2
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.4-pve1
criu: 3.11-3
glusterfs-client: 8.1-1
ifupdown: residual config
ifupdown2: 3.0.0-1+pve3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.16-pve1
libproxmox-acme-perl: 1.0.5
libpve-access-control: 6.1-3
libpve-apiclient-perl: 3.0-3
libpve-common-perl: 6.2-2
libpve-guest-common-perl: 3.1-3
libpve-http-server-perl: 3.0-6
libpve-storage-perl: 6.2-8
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.3-1
lxcfs: 4.0.3-pve3
novnc-pve: 1.1.0-1
openvswitch-switch: 2.12.0-1
proxmox-backup-client: 0.9.0-2
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.3-1
pve-cluster: 6.2-1
pve-container: 3.2-2
pve-docs: 6.2-6
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-3
pve-firmware: 3.1-3
pve-ha-manager: 3.1-1
pve-i18n: 2.2-1
pve-qemu-kvm: 5.1.0-3
pve-xtermjs: 4.7.0-2
qemu-server: 6.2-15
smartmontools: 7.1-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 0.8.4-pve2
 
