How to monitor progress of a qcow2 import to Ceph storage

Ayush

Member
Oct 27, 2023
Hi Team,

I am trying to import a 1 TB qcow2 image to a Proxmox Ceph storage. To do this I performed the following steps:

1) I copied the 1 TB qcow2 image to a ZFS storage, HDD-2TB-MIR.
2) I created a new VM 102 on Proxmox.
3) Then I ran the following command to import the image into VM 102 on the Ceph storage:

qm importdisk 102 166_centos7.qcow2 SSD-SPEED

But when I ran this command I didn't see any progress, nor was I able to find a way to monitor it.


importing disk '166_centos7.qcow2' to VM 102 ...
transferred 0.0 B of 1000.0 GiB (0.00%)

I don't know whether I am doing it the correct way or not. Please help me identify the issue.
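
For reference, one way to watch such an import from a second shell is to check that the qemu-img conversion is actually running and whether the target RBD image on the Ceph side is growing. This is only a sketch: the pool name behind SSD-SPEED is an assumption (check the pool option in /etc/pve/storage.cfg), and the image name is whatever importdisk allocated, e.g. vm-102-disk-1.

ps aux | grep qemu-img
rbd ls SSD-SPEED
watch -n 10 rbd du SSD-SPEED/vm-102-disk-1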
 
Hello Proxmox Experts,

After running this command there is no progress shown, nor is it showing any error.

qm importdisk 102 166_centos7.qcow2 SSD-SPEED


trying to acquire cfs lock 'storage-SSD-SPEED' ...
transferred 0.0 B of 1000.0 GiB (0.00%)

But it is not showing any progress.

SSD-SPEED represents the Ceph storage, and 166_centos7.qcow2 was copied onto the ZFS pool.
 
Hi Team,
If I run the following command

qm importdisk 102 166_centos7.qcow2 SSD-SPEED


trying to acquire cfs lock 'storage-SSD-SPEED' ...
transferred 0.0 B of 1000.0 GiB (0.00%)

it should start creating a logical volume, but it is not able to create one. Can anyone guide us on what's wrong with this command?
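
Note that on an RBD-backed storage like SSD-SPEED the import creates a Ceph RBD image rather than an LVM logical volume, so nothing will show up in lvs. Whether an image was allocated at all can be checked on the Ceph side; the assumption here is that the pool behind SSD-SPEED carries the same name (see /etc/pve/storage.cfg):

rbd ls -l SSD-SPEED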
 
Hi Team,
If I run the following command

qm importdisk 102 166_centos7.qcow2 SSD-SPEED


trying to acquire cfs lock 'storage-SSD-SPEED' ...
transferred 0.0 B of 1000.0 GiB (0.00%)

it should start creating a logical volume, but it is not able to create one. Can anyone guide us on what's wrong with this command?
Is your CEPH remote or installed with PVE? Is it green/ok?
 
Maybe you need to set --format raw as an additional parameter.
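
For example, the import command with that parameter would look like this (raw is the only image format a Ceph RBD storage can hold anyway):

qm importdisk 102 166_centos7.qcow2 SSD-SPEED --format raw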

Another way would be to reference the disk in its current storage directly in the VM config and then trigger the move using the GUI. Then you have it as a task in the background.
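
A rough sketch of that second approach, using the names from this thread and assuming the qcow2 sits on a directory-type storage (the exact volume ID depends on the storage type): add a line like the following to /etc/pve/qemu-server/102.conf,

virtio1: HDD-2TB-MIR:102/166_centos7.qcow2,size=1000G

then move the disk to Ceph either in the GUI (Hardware, then Move Disk / Move Storage depending on the version) or on the CLI, where newer PVE versions use

qm disk move 102 virtio1 SSD-SPEED --format raw

and older ones the equivalent qm move_disk.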
 
Is your CEPH remote or installed with PVE? Is it green/ok?
Ceph is installed on PVE and its health is OK.

ceph -s
  cluster:
    id:     47061c54-d430-47c6-afa6-952da8e88877
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ie172,ie171,e173 (age 7w)
    mgr: ie172(active, since 3M), standbys: ie173, ie171
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 3 up (since 6d), 3 in (since 6d)

  data:
    volumes: 1/1 healthy
    pools:   4 pools, 209 pgs
    objects: 102.44k objects, 400 GiB
    usage:   1.2 TiB used, 4.1 TiB / 5.2 TiB avail
    pgs:     209 active+clean

  io:
    client: 0 B/s rd, 3.4 KiB/s wr, 0 op/s rd, 0 op/s wr
 
Maybe you need to set --format raw as an additional parameter.

Another way would be to reference the disk in its current storage directly in the VM config and then trigger the move using the GUI. Then you have it as a task in the background.
I tried both but neither is working for me. Is there any other method that I can try?
 
I tried both but neither is working for me. Is there any other method that I can try?
I edited the config file as follows:


boot: order=virtio0;ide2;net0
cores: 8
cpu: qemu64
ide2: none,media=cdrom
memory: 8012
meta: creation-qemu=8.0.2,ctime=1702884730
name: kib-166
net0: virtio=7E:6F:98:5F:8C:BB,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=4dfa8d67-02d6-42d5-a0f6-8574d7a6a272
sockets: 1
virtio0: HDD-1TB-MIR:vm-102-disk-0,iothread=1,size=32G
virtio1: HDD-1TB-MIR:166_rhel7.qcow2,iothread=1,size=1000G
vmgenid: e4a75816-7ea9-4ad5-81c9-f7a625d63f69


But it is not reflected in the GUI. What is missing from my side?
 
Actually you should be able to see this in the VM overview. Try it without .qcow2. Otherwise, please do an ls -la on the contents of the storage.
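
For reference on the naming: if HDD-1TB-MIR is a directory-type storage, the file has to live under <storage path>/images/102/ and the volume ID then includes the VM ID, e.g. HDD-1TB-MIR:102/166_rhel7.qcow2; if it is a ZFS-pool storage, it can only hold raw zvols named like vm-102-disk-N, and a .qcow2 name cannot be parsed there at all. A sketch of the check, with the mount path being an assumption:

cat /etc/pve/storage.cfg
ls -la /path-of-HDD-1TB-MIR/images/102/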
 
boot: order=virtio1;ide2;net0
cores: 8
cpu: qemu64
ide2: none,media=cdrom
memory: 8012
meta: creation-qemu=8.0.2,ctime=1702884730
name: kib-166
net0: virtio=7E:6F:98:5F:8C:BB,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsihw: virtio-scsi-pci
smbios1: uuid=4dfa8d67-02d6-42d5-a0f6-8574d7a6a272
sockets: 1
unused0: HDD-1TB-MIR:vm-102-disk-1
virtio0: HDD-1TB-MIR:vm-102-disk-0,iothread=1,size=32G
virtio1: HDD-1TB-MIR:kib-mac-dash-perf_166_rhel7.qcow2,iothread=1,size=1000G
vmgenid: e4a75816-7ea9-4ad5-81c9-f7a625d63f69



Now I can see the disk in the GUI, but when I connect it and change the boot order it shows me the error "unable to parse zfs volume name".
 
Now I can see the disk in the GUI, but when I connect it and change the boot order it shows me the error "unable to parse zfs volume name".
You should attach the virtual hard drive from its current source in the config, then use the GUI to move it to the correct target storage, and only then start it. I suspect that your error comes from the image being named incorrectly.
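
A possible CLI equivalent of that move, once the disk is attached correctly as virtio1 (a sketch; older PVE versions call the subcommand qm move_disk, and --delete 1 removes the source copy only after a successful move):

qm disk move 102 virtio1 SSD-SPEED --format raw --delete 1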

It's always better to include log excerpts or screenshots, so one can see exactly what you're talking about and where it occurs.