Incorrect migration of VMs from oVirt to Proxmox

bthn.szk

Jun 20, 2023
Hello,

for the last few days I have been dealing with the topic of VM migrations. My plan is to migrate VMs hosted on a KVM-based oVirt environment to Proxmox VE, and I ran into issues such as misconfigurations after importing with qm importovf. I already familiarized myself with the migration steps from the Proxmox guide (https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE). The migration itself actually works pretty well, but when I start the migrated VM on Proxmox the results are not what I expected. After a lot of troubleshooting and many unsuccessful attempts I decided to look for help here.
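For reference, the import itself was done roughly like the following, based on the wiki article (the VM ID, the path to the exported .ovf and the target storage name are placeholders):
Code:
# create a new VM from the exported OVF; 9000, the path and local-lvm are placeholders
qm importovf 9000 /mnt/export/myvm.ovf local-lvm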

Some of my issues:

1) The qm importovf utility somehow does not import everything correctly from the .ovf file, and afterwards I have to manually reconfigure and adjust some settings in the config via the web UI or CLI. Any tips or ideas about that?
2) The only network interface/device that works for me is the Intel E1000. When I SSH to the machine with that NIC I get significant delays, and it is practically impossible to work on the CLI.
3) I also noticed that mounting the disks during boot takes far too long, up to about 1 minute 30 seconds. Maybe the disk has issues or the swap partition is damaged? See the screenshot below.

[Screenshot: 1687263526855.png, boot output showing the delay]

These are my first attempts at migrations, so it may well be that certain issues are simply unknown to me. Any help or suggestions for a solution are welcome. Please forgive any spelling or typing mistakes. Thank you!

Best regards,
bthn.szk
 
bthn.szk said:
"The qm importovf utility somehow does not import everything correctly from the .ovf file, and afterwards I have to manually reconfigure and adjust some settings in the config via the web UI or CLI."

Please be more specific (the exact command you used and its output; what did you have to reconfigure manually).

Also interesting:
Code:
cat /etc/pve/qemu-server/<affected-vm-conf>
cat /etc/pve/storage.cfg  # only relevant lines

https://forum.proxmox.com/threads/qm-importovf-error.126797/post-554033
Maybe this works for you?
 
I assume that the resume kernel parameter is set to some disk which does not exist. From the screenshot it seems that you are using some kind of SUSE derivative. openSUSE, for example, has the behaviour of setting the kernel resume parameter to a /dev/disk/by-id string, which can also be seen in the screenshot.

Example:
Code:
  resume=/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_beb722df-523f-4c37-ab32-11ce5c302e72-part3
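If that is the case, it should be visible inside the migrated guest by comparing the resume= entry on the kernel command line with the by-id links that actually exist (just a quick diagnostic sketch):
Code:
# inside the guest: what does resume= point at?
cat /proc/cmdline | tr ' ' '\n' | grep resume
# and does such a device actually exist?
ls -l /dev/disk/by-id/ | grep -i scsi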

It seems that the serial is not automatically migrated from the ovf file. There is a serial parameter for virtio-scsi drives in qm.conf
(see https://pve.proxmox.com/wiki/Manual:_qm.conf), so it can be set manually. It is present in the ovf file.
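Setting it could look roughly like this (VM ID, storage and disk volume are placeholders; whether the full length actually survives is exactly the catch described next):
Code:
# re-attach the existing disk with an explicit serial taken from the ovf (names are placeholders)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,serial=beb722df-523f-4c37-ab32-11ce5c302e72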

But there is another catch: oVirt uses UUIDs as serial numbers, which are quite long. At the time of writing, Proxmox documents a limit of 20 bytes (url-encoded) for the serial number:
Code:
    serial => {
        type => 'string',
        format => 'urlencoded',
        format_description => 'serial',
        maxLength => 20*3, # *3 since it's %xx url enoded
        description => "The drive's reported serial number, url-encoded, up to 20 bytes long.",
        optional => 1,
    },
see https://github.com/proxmox/qemu-ser...d935c6c38cdb2386b61f8/PVE/QemuServer/Drive.pm

The UUID of the disk is actually 36 characters long, so Proxmox only sets the first 20 chars, beb722df-523f-4c37-a, and not the full 36 characters of the original UUID beb722df-523f-4c37-ab32-11ce5c302e72.

The funny thing is that the choice of 20 characters is well founded, i.e. it is the same limit as QEMU uses:
Code:
    case VIRTIO_BLK_T_GET_ID:
    {
        /*
         * NB: per existing s/n string convention the string is
         * terminated by '\0' only when shorter than buffer.
         */
        const char *serial = s->conf.serial ? s->conf.serial : "";
        size_t size = MIN(strlen(serial) + 1,
                          MIN(iov_size(in_iov, in_num),
                              VIRTIO_BLK_ID_BYTES));
        iov_from_buf(in_iov, in_num, 0, serial, size);
        virtio_blk_req_complete(req, VIRTIO_BLK_S_OK);
        virtio_blk_free_request(req);
        break;
    }
see (https://github.com/qemu/qemu/blob/cab35c73be9d579db105ef73fa8a60728a890098/hw/block/virtio-blk.c)

Where VIRTIO_BLK_ID_BYTES is defined as 20 in https://github.com/qemu/qemu/blob/c...clude/standard-headers/linux/virtio_blk.h#L56

I have to do some reading on how oVirt manages to circumvent this limit, because on my oVirt machine the serial is clearly set to the full 36 characters:
Code:
# lsblk -o SERIAL
SERIAL
beb722df-523f-4c37-ab32-11ce5c302e72
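The same check on the migrated Proxmox guest (plus a look at the by-id links that the resume parameter relies on) should show how much of the serial actually arrives there; just a suggestion:
Code:
lsblk -o NAME,SERIAL
ls -l /dev/disk/by-id/ | grep -i qemu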
 
I did some further investigation and dumped the QEMU command line generated by oVirt, using virsh:

Code:
virsh # domxml-to-native qemu-argv --domain microfocus-netware-bridge
I'll spare you the whole output; the oVirt storage UUID is mentioned in several places, but most of them are just references to the storage location on disk. I think the most interesting part is this one (by the way, I attached the whole dump as a file):
Code:
-device scsi-hd,bus=ua-6c30be2b-3c8e-4ad7-bd26-c8788bc8dde6.0,channel=0,scsi-id=0,lun=0,device_id=beb722df-523f-4c37-ab32-11ce5c302e72,drive=libvirt-1-format,id=ua-beb722df-523f-4c37-ab32-11ce5c302e72,bootindex=1,write-cache=on,serial=beb722df-523f-4c37-ab32-11ce5c302e72,werror=stop,rerror=stop

And you can see that the serial is indeed set to the storage UUID: serial=beb722df-523f-4c37-ab32-11ce5c302e72 ("storage UUID" being oVirt terminology).

I did a deep dive into the QEMU source, and I believe this is the place where the serial parameter from the command line above is parsed:
Code:
    DEFINE_PROP_STRING("serial", VirtIOBlock, conf.serial),
(see https://github.com/qemu/qemu/blob/c...9d2b3cff03995dd5d/hw/block/virtio-blk.c#L1710)

So at this point the string is complete. Referring to my post above, is the serial string only truncated for VIRTIO_BLK_T_GET_ID?
But how does QEMU then provide the full serial to the guest Linux?

At this point I think it would be interesting to do a little experiment (see the sketch after this list):
1. Modify the source in Drive.pm to accept e.g. 40 chars
2. Observe how the migrated machine behaves
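A minimal sketch of step 1, assuming the only change needed is the maxLength of the serial property quoted above (untested):
Code:
# PVE/QemuServer/Drive.pm, in the serial property quoted above:
maxLength => 40*3, # raised from 20*3 for the experiment; *3 since it's %xx url encoded
Afterwards the pvedaemon and pveproxy services would presumably need a restart so the changed schema is picked up.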

Edit:
The QEMU version on my oVirt instance is:
Code:
# /usr/libexec/qemu-kvm -version
QEMU emulator version 6.0.0 (qemu-kvm-6.0.0-33.el8s)
Copyright (c) 2003-2021 Fabrice Bellard and the QEMU Project developers
 

Attachments

  • virsh_dump.txt
    6.5 KB
So, just to complete my investigation, I migrated my VM to Proxmox and dumped the QEMU command with qm showcmd 100.
You can enter the full serial into the VM config, but it seems that it is still getting truncated.

Code:
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100,serial=beb722df-523f-4c37-ab32-11ce5c302e72'

So from my point of view the device string is quite similar. One obvious difference is that the device_id parameter is missing.

Another thing I discovered is that QEMU only truncates the serial to 20 characters for virtio_blk devices.
For virtio_scsi devices, I believe this is the more relevant code in QEMU:

Code:
        l = strlen(s->serial);
        if (l > 36) {
            l = 36;
        }
(See scsi_disk.c https://github.com/qemu/qemu/blob/c...b3cff03995dd5d/hw/scsi/scsi-disk.c#LL649-L652)
 

Attachments

  • qm_dump.txt
    2 KB
Short addendum: it seems that the device_id parameter of the SCSI disk is the key. I added device_id=beb722df-523f-4c37-ab32-11ce5c302e72 to the device string dumped by qm, like so:

Code:
-device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,device_id=beb722df-523f-4c37-ab32-11ce5c302e72,drive=drive-scsi0,id=scsi0,bootindex=100,serial=beb722df-523f-4c37-ab32-11ce5c302e72'

Then I started the VM manually by copy-pasting the modified qm showcmd dump into the shell (roughly as sketched below).
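Roughly, the manual start looked like this (the temporary file name is arbitrary):
Code:
# dump the generated command line, add device_id=... to the -device scsi-hd part, then run it
qm showcmd 100 > /tmp/vm100-start.sh
vi /tmp/vm100-start.sh
sh /tmp/vm100-start.sh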

@alex# Do you know how to add the device_id parameter via proxmox qm or any other means?
 
