VM disk silently converted from qcow2 to raw during migration.

Fabian_E

Proxmox Staff Member
Aug 1, 2019
Hi,
could you share the configuration of the VM and the exact command you used for the migration (or, if done via the GUI, the settings), especially whether a target storage was specified? Could you also share the configuration of the source storage and (if distinct) the target storage? And lastly, what was the old format of the disk?

EDIT: It's in the title, my bad.
 
Jul 18, 2018
Hi, all nodes use local storage only (HW RAID, ext4, qcow2 images). The migration was done via the GUI - there is not much to change there.

Here are the settings:
agent: 1
bootdisk: virtio0
cores: 2
ide2: none,media=cdrom
memory: 4032
name: dbstore
net0: e1000=02:FC:4F:18:C9:10,bridge=vmbr5
numa: 0
ostype: l26
smbios1: uuid=4569fe69-ea2b-4e6a-af4b-4f51f446753f
sockets: 1
virtio0: localhost:103/vm-103-disk-1.qcow2,size=32G
 
Fabian_E

Proxmox Staff Member
Aug 1, 2019
I'm not able to reproduce here. Could you share the section for the storage localhost in your /etc/pve/storage.cfg? What happens if you run
Code:
qm migrate <VMID> <NODENAME> --targetstorage localhost --online --with-local-disks
and
Code:
qm migrate <VMID> <NODENAME> --online --with-local-disks
respectively, for a running VM with a qcow2 disk? (Maybe create a new VM with a small disk and no OS if you don't want to wait long, but please make sure it's running, as otherwise it'll do an offline migration.)
 
Jul 18, 2018
It's strange. After I converted the disk back from raw to qcow2 manually (because of the previous migration), the new migration failed (both via GUI and command line).
See the attachment - it looks like it expects a raw image.
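The manual raw-to-qcow2 conversion mentioned above can be done with qemu-img (shipped with Proxmox VE); a minimal sketch, assuming placeholder paths and a stopped VM - adjust to your actual storage layout:

```shell
#!/bin/sh
# Sketch: convert a raw disk image back to qcow2.
# The path below is a hypothetical example; stop the VM before converting its disk.
SRC=/mnt/pve/localhost/images/103/vm-103-disk-1.raw
DST=${SRC%.raw}.qcow2          # derive the target name: same path, .qcow2 suffix
qemu-img convert -p -f raw -O qcow2 "$SRC" "$DST"
# Afterwards the VM config must reference the new image; 'qm rescan --vmid 103'
# can pick up images the config doesn't know about yet.
```

Note that qemu-img does not delete the source image; remove the old raw file only after verifying the VM boots from the qcow2 one.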

Hi, here is the relevant part of storage.cfg (there are also network storages, but they are used for backup files only):

dir: local
disable
path /var/lib/vz
content rootdir
maxfiles 0
shared 0

dir: localhost
path /mnt/pve/localhost
content backup,images
maxfiles 4
shared 0
 

Attachments

Fabian_E

Proxmox Staff Member
Aug 1, 2019
The conversion was the normal behavior before qemu-server 6.1-5.
Did the qcow2->raw conversion happen this time as well? If not, what were the source and target nodes when it happened?

From the migration log you posted, it seems that the source node for that migration (i.e. pve3) is running an older version (qemu-server older than 6.1-4). There was a change to prefix the 'start failed' error with the name of the target node (and also to print the output of the qemu command), but that prefix is missing in your log.

So it seems that for some reason there is still old code running on some of your nodes. Could you (re-)check with pveversion -v on all nodes if the packages are up to date, especially on pve3?
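One quick way to compare package versions across the cluster is a loop over the nodes; a sketch assuming root SSH access between cluster nodes, with hypothetical node names:

```shell
#!/bin/sh
# Sketch: print the qemu-server package version on each node.
# Node names below are placeholders; replace them with your actual nodes.
for node in pve1 pve2 pve3; do
    printf '%s: ' "$node"
    ssh "root@$node" "pveversion -v | grep '^qemu-server'"
done
```

pveversion -v prints one "package: version" line per package, so grepping for the `^qemu-server` prefix isolates the package in question.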
 
Jul 18, 2018
You're right, the other nodes are still on 6.1-3 - that's why it happens. The one I'm using for GUI management is running 6.1-7.

I'll upgrade all nodes to the latest version and see if that helps (it probably will).

Thanks for your reply.
Petr
 
