My Ceph cluster shows healthy, and my VMs on separate servers, running from Ceph mounts, are up, but I'm unable to migrate from one host to another.
When I attempt to migrate, I get this message:
2020-03-27 09:10:32 starting migration of VM 110 to node 'proxmox-ceph-1' (10.237.195.4)
Job for mnt-pve-cephfs.mount failed.
2020-03-27 09:10:32 ERROR: Failed to sync data - mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details.
2020-03-27 09:10:32 aborting phase 1 - cleanup resources
2020-03-27 09:10:32 ERROR: migration aborted (duration 00:00:00): Failed to sync data - mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details.
TASK ERROR: migration aborted
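Before going through the logs, here is how I check whether the CephFS storage is mounted at all on the source node (standard findmnt and pvesm commands; the path comes from the failing unit in the error above):

# Is the CephFS storage actually mounted where the unit says it should be?
findmnt /mnt/pve/cephfs

# Ask the Proxmox storage layer for its view of all configured storages.
pvesm status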
Looking at the logs, I'm seeing indications of a config issue:
root@proxmox-compute-1:~# systemctl status mnt-pve-cephfs.mount
● mnt-pve-cephfs.mount - /mnt/pve/cephfs
Loaded: loaded (/run/systemd/system/mnt-pve-cephfs.mount; static; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-03-27 09:19:07 CDT; 1s ago
Where: /mnt/pve/cephfs
What: 192.168.0.4,192.168.0.5,192.168.0.6:/
Mar 27 09:19:07 proxmox-compute-1 systemd[1]: Mounting /mnt/pve/cephfs...
Mar 27 09:19:07 proxmox-compute-1 mount[13149]: mount error 22 = Invalid argument
Mar 27 09:19:07 proxmox-compute-1 systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=22/n/a
Mar 27 09:19:07 proxmox-compute-1 systemd[1]: mnt-pve-cephfs.mount: Failed with result 'exit-code'.
Mar 27 09:19:07 proxmox-compute-1 systemd[1]: Failed to mount /mnt/pve/cephfs.
root@proxmox-compute-1:~# journalctl -xe
-- The job identifier is 4168 and the job result is failed.
Mar 27 09:19:17 proxmox-compute-1 pvestatd[1817]: mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details.
Mar 27 09:19:17 proxmox-compute-1 kernel: libceph: bad option at 'conf=/etc/pve/ceph.conf'
Mar 27 09:19:27 proxmox-compute-1 systemd[1]: Reloading.
Mar 27 09:19:27 proxmox-compute-1 systemd[1]: Mounting /mnt/pve/cephfs...
-- Subject: A start job for unit mnt-pve-cephfs.mount has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit mnt-pve-cephfs.mount has begun execution.
--
-- The job identifier is 4181.
Mar 27 09:19:27 proxmox-compute-1 mount[13299]: mount error 22 = Invalid argument
Mar 27 09:19:27 proxmox-compute-1 systemd[1]: mnt-pve-cephfs.mount: Mount process exited, code=exited, status=22/n/a
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- An n/a= process belonging to unit mnt-pve-cephfs.mount has exited.
--
-- The process' exit code is 'exited' and its exit status is 22.
Mar 27 09:19:27 proxmox-compute-1 systemd[1]: mnt-pve-cephfs.mount: Failed with result 'exit-code'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit mnt-pve-cephfs.mount has entered the 'failed' state with result 'exit-code'.
Mar 27 09:19:27 proxmox-compute-1 systemd[1]: Failed to mount /mnt/pve/cephfs.
-- Subject: A start job for unit mnt-pve-cephfs.mount has failed
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit mnt-pve-cephfs.mount has finished with a failure.
--
-- The job identifier is 4181 and the job result is failed.
Mar 27 09:19:27 proxmox-compute-1 kernel: libceph: bad option at 'conf=/etc/pve/ceph.conf'
Mar 27 09:19:27 proxmox-compute-1 pvestatd[1817]: mount error: See "systemctl status mnt-pve-cephfs.mount" and "journalctl -xe" for details.
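The "bad option" lines make it look like the conf= mount option is reaching the kernel client, which doesn't understand it. To take the Proxmox storage layer out of the picture, the mount can be tried by hand; a minimal sketch, with the monitor list and mount point taken from the unit above (the name= and secretfile= values are my assumptions about what pvestatd passes):

# Try the same mount by hand, without the conf= option that libceph
# rejects. Monitors and mount point come from the unit above; name= and
# secretfile= are assumptions about what the storage layer passes.
mount -t ceph 192.168.0.4,192.168.0.5,192.168.0.6:/ /mnt/pve/cephfs \
    -o name=admin,secretfile=/etc/pve/priv/ceph/cephfs.secret

# mount(8) dispatches type "ceph" to this helper (shipped by ceph-common);
# my understanding is that it normally handles conf= before calling mount(2).
ls -l /sbin/mount.ceph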
These are the contents of the file mentioned in the journal, but I don't see anything that stands out:
root@proxmox-compute-1:~# cat /etc/pve/ceph.conf
[global]
auth_client_required = cephx
auth_cluster_required = cephx
auth_service_required = cephx
cluster_network = 192.168.0.4/24
fsid = f8d6430f-0df8-4ec5-b78a-d8956832b0de
mon_allow_pool_delete = true
mon_host = 192.168.0.4 192.168.0.5 192.168.0.6
osd_pool_default_min_size = 2
osd_pool_default_size = 3
public_network = 192.168.0.4/24
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring
[mds]
keyring = /var/lib/ceph/mds/ceph-$id/keyring
[mds.proxmox-ceph-2]
host = proxmox-ceph-2
mds_standby_for_name = pve
[mds.proxmox-ceph-1]
host = proxmox-ceph-1
mds_standby_for_name = pve
[mds.proxmox-ceph-3]
host = proxmox-ceph-3
mds_standby_for_name = pve
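Since the journal points at this file, one quick sanity check is to see whether it parses at all, using ceph-conf from ceph-common (mon_host is just an example key):

# ceph-conf errors out if the file is syntactically broken; otherwise it
# prints the value of the requested key.
ceph-conf -c /etc/pve/ceph.conf --lookup mon_host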