Would it then be better to first upgrade to the newest Ceph version (I think v15 is also supported on Proxmox right now) and then recreate all OSDs one by one?
I am not sure which changes are in v15, but if the default disk layout changes there as well, then this way we wouldn't have to do it twice...
I also initially thought there were partition table inconsistencies; that's why I rebooted the node, expecting that to fix it since the kernel has to reread the tables.
The cluster was initially set up with Proxmox 5 (including Ceph) and we upgraded it last year (including Ceph)...
I am trying to create an OSD on one of the nodes in our 4-node cluster and I am getting this error:
command 'ceph-volume lvm create --cluster-fsid e9f42f14-bed0-4839-894b-0ca3e598320e --block.db '' --data /dev/sdi' failed: exit code 1
System state before trying to create the OSD (via the...
We already have a 4-node Proxmox cluster running Ceph and are thinking about expanding it.
We are trying to reevaluate our hardware choices by observing the performance of our current cluster, and are now trying to find out how heavily the WAL and DB are used on our system.
Each node has one...
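One way to check this (a sketch; the exact counter names can vary between Ceph releases) is via the OSD's BlueFS perf counters: on the node hosting the OSD, `ceph daemon osd.0 perf dump bluefs` should report `db_total_bytes`/`db_used_bytes`, and `wal_*` equivalents when a separate WAL device exists. The usage percentage can then be computed in plain shell; the byte values below are made-up sample numbers, not from any real cluster:

```shell
# Sample values as reported under the "bluefs" section of
# `ceph daemon osd.0 perf dump` (made up for illustration):
db_total_bytes=32212254720   # 30 GiB DB partition
db_used_bytes=1073741824     # 1 GiB currently in use

# Integer percentage of the DB device currently in use:
echo "$(( db_used_bytes * 100 / db_total_bytes ))% of the DB is used"
```

Repeating this over time (or across all OSDs) gives a feel for whether the DB/WAL sizing is adequate before buying hardware for the expansion.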
Yesterday I didn't notice that the syslog entries were from pvedaemon, not pvestatd. So after patching the Perl script I restarted both, and now it works. CephFS is mounted and can be used via the web interface.
@Alwin, I assume once the fix is released on your repos my quick and dirty fix (I only...
I got rid of the mount error by patching the Perl script on all nodes (though on my first try I had limited the storage to pve1 only).
The mount succeeded and I can see the content of the CephFS in /mnt/pve/cephfs, but the web interface and syslog errors are the same as before.
The error in the...
@Alwin, I found a bug in the PVE/Storage/CephTools.pm Perl script. It still doesn't work, but at least I get a different error now.
This is the fix:
--- /usr/share/perl5/PVE/Storage/CephTools.pm 2019-01-07 16:31:05.170790597 +0100
+++ /usr/share/perl5/PVE/Storage/CephTools.pm 2019-01-07...
Yes, after adding CephFS via the web interface the mount folder was created and contained the data that is on CephFS, so the mount was successful. But pvestatd still spammed the syslog with the mount errors and the web interface showed the grey question mark.
This is where I got the idea to set the machine type explicitly. So I ran this command:
qm set 109 -machine pc-i440fx-2.11
I have now removed the config entry specifying the machine type and stopped and started the ESXi VM again, but the behaviour hasn't changed.
Here is the complete...
We have a 4-node Proxmox cluster (5.3) running a nested ESXi (mostly for migrating old VMware VMs to Ceph).
On this nested ESXi a Windows 10 VM (UEFI) starts and runs fine.
A Debian 9.3 VM starts and runs fine too.
But a Debian 9.4 VM does not start (when booting a 9.4 netinstall you...
We have a 4-node Proxmox cluster that I just updated to Proxmox 5.3 (from 5.2) without any problems.
Now I want to test the new CephFS support in Proxmox 5.3, but after I add it via the storage menu in the web interface, the CephFS storage entry only shows a grey question mark.