Hi,
When changing the Ceph monitors via the web GUI in our Proxmox cluster, I noticed that the settings in /etc/pve/storage.cfg did not update, while the ones in /etc/pve/ceph.conf did.
This led to a situation where VMs could no longer be started (or migrated online), because the -drive parameter in the kvm call still contained the old Ceph monitors instead of the new ones, e.g.
Code:
-drive file=rbd:rbd/vm-109-disk-1:mon_host=172.18.117.66;172.18.117.67:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/idivceph0.keyring,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on
instead of
Code:
-drive file=rbd:rbd/vm-109-disk-1:mon_host=172.18.117.68;172.18.117.69;172.18.117.93:id=admin:auth_supported=cephx:keyring=/etc/pve/priv/ceph/idivceph0.keyring,if=none,id=drive-virtio0,format=raw,cache=none,aio=native,detect-zeroes=on
Updating storage.cfg with the changed monhost entries remediated the situation. This occurred on Proxmox 4.4 with Ceph Hammer. Is this a general bug, or does it indicate a problem with our cluster?
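For reference, this is roughly what the RBD section in /etc/pve/storage.cfg looks like after the manual fix; the storage ID (idivceph0), pool, username and monitor IPs are taken from the kvm command line above, and the remaining option lines may differ on other setups:
Code:
rbd: idivceph0
        monhost 172.18.117.68;172.18.117.69;172.18.117.93
        pool rbd
        content images
        username admin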