Hello everyone,
I would like to ask for help with an issue I'm having moving a virtual machine disk from local-lvm storage to a newly created Ceph pool. Despite extensive research and troubleshooting, I keep running into the following error:
Code:
create full clone of drive ide0 (local-lvm:vm-100-disk-0)
drive mirror is starting for drive-ide0
drive-ide0: Cancelling block job
drive-ide0: Done.
Removing image: 1% complete...
[...]
Removing image: 100% complete...done.
TASK ERROR: storage migration failed: mirroring error: VM 100 qmp command 'drive-mirror' failed - Could not open 'rbd:pool_vm/vm-100-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/pool_vm.keyring': No such file or directory
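Since the rbd CLI works but QEMU's open fails, one way I can think of to narrow it down is to probe the exact same rbd: URI the way QEMU opens it, using qemu-img (just a diagnostic sketch; pool_vm/test-disk is the scratch image from the rbd info output further down):

```shell
# Same rbd: URI format that the failing drive-mirror task reports above,
# pointed at the existing test image instead of the already-removed vm disk.
uri='rbd:pool_vm/test-disk:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/pool_vm.keyring'
echo "$uri"
# Probe it through the QEMU block layer; guarded so this is a no-op on hosts
# without qemu-img (it ships with Proxmox's pve-qemu-kvm package).
command -v qemu-img >/dev/null && qemu-img info "$uri" || true
```

If qemu-img reports the same "No such file or directory", the problem is on the QEMU/librbd side rather than in the Proxmox storage layer.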
Interestingly, running the following command from the command line works fine, using the same parameters that fail during the move:
Code:
root@proxmox1:~# rbd info pool_vm/test-disk --conf /etc/pve/ceph.conf --id admin --keyring /etc/pve/priv/ceph/pool_vm.keyring
rbd image 'test-disk':
size 1 GiB in 256 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 2478f7de3df2d
block_name_prefix: rbd_data.2478f7de3df2d
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Sat Jul 6 10:52:32 2024
access_timestamp: Sat Jul 6 10:52:32 2024
modify_timestamp: Sat Jul 6 10:52:32 2024
Permissions and ceph conf:
Code:
root@proxmox1:~# ls -l /etc/pve/ceph.conf /etc/pve/priv/ceph/
-rw-r----- 1 root www-data 628 Jul 5 23:17 /etc/pve/ceph.conf
/etc/pve/priv/ceph/:
total 1
-rw------- 1 root www-data 151 Jul 5 23:18 pool_k8s.keyring
-rw------- 1 root www-data 151 Jul 6 10:55 pool_vm.keyring
root@proxmox1:~# cat /etc/pve/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 10.0.0.11/16
fsid = 0e6f5a25-287d-41f7-a995-cf63f6a386fb
mon_allow_pool_delete = true
mon_host = 10.0.0.11 10.0.0.13 10.0.0.12
ms_bind_ipv4 = true
ms_bind_ipv6 = false
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 10.0.0.11/16
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[client.crash]
keyring = /etc/pve/ceph/$cluster.$name.keyring
[mon.proxmox1]
public_addr = 10.0.0.11
[mon.proxmox2]
public_addr = 10.0.0.12
[mon.proxmox3]
public_addr = 10.0.0.13
Here's the background:
- Fresh installation with three new SSDs.
- Ceph works perfectly for creating VMs directly on the pool.
- Moving disks from the Ceph pool to local LVM works without issues.
- I managed to move a disk successfully a couple of times initially; somehow I broke it since then.
- I can restore VMs directly onto the Ceph pool; only the disk move seems to fail.
The issue persists after a couple of re-installation attempts, many forum searches, and a couple of nights spent on it.
Can anybody please help?
Thanks in advance.